Test Report: Hyper-V_Windows 20318

dd22c410311484da6763aae43511cabe19037b94:2025-01-27:38092

Failed tests (11/211)

TestErrorSpam/setup (189.53s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-762000 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 --driver=hyperv
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-762000 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 --driver=hyperv: (3m9.5266844s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube VM"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-762000] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
- KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
- MINIKUBE_LOCATION=20318
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-762000" primary control-plane node in "nospam-762000" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-762000" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (189.53s)
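Note: both unexpected stderr lines point at the same issue: the VM could not reach https://registry.k8s.io/. If the CI host needs an HTTP proxy for outbound traffic, the proxy guide linked in the warning applies; a minimal sketch of that setup is below (the proxy address and the NO_PROXY subnet are placeholders, not values taken from this run):
	# Hypothetical proxy settings exported before the start command.
	$env:HTTPS_PROXY = "http://proxy.example.com:3128"
	$env:NO_PROXY = "localhost,127.0.0.1,172.29.192.0/24"
	out/minikube-windows-amd64.exe start -p nospam-762000 --driver=hyperv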

TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-253500 service --namespace=default --https --url hello-node: exit status 1 (15.0151433s)
functional_test.go:1511: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-253500 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

TestFunctional/parallel/ServiceCmd/Format (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-253500 service hello-node --url --format={{.IP}}: exit status 1 (15.0107699s)
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-253500 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1548: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.01s)

TestFunctional/parallel/ServiceCmd/URL (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-253500 service hello-node --url: exit status 1 (15.0132546s)
functional_test.go:1561: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-253500 service hello-node --url": exit status 1
functional_test.go:1565: found endpoint for hello-node: 
functional_test.go:1573: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.01s)
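Note: all three ServiceCmd subtests (HTTPS, Format, URL) exited with status 1 after roughly 15 seconds and returned an empty URL and scheme. A hedged way to inspect the same service by hand on this profile, assuming the hello-node service still exists in the default namespace:
	# Confirm the service and its NodePort exist.
	out/minikube-windows-amd64.exe -p functional-253500 kubectl -- get svc hello-node -n default
	# List every service URL minikube can resolve for the profile.
	out/minikube-windows-amd64.exe -p functional-253500 service list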

TestMultiControlPlane/serial/PingHostFromPods (68.71s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-011400 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-011400 -- exec busybox-58667487b6-68jl6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-011400 -- exec busybox-58667487b6-68jl6 -- sh -c "ping -c 1 172.29.192.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-011400 -- exec busybox-58667487b6-68jl6 -- sh -c "ping -c 1 172.29.192.1": exit status 1 (10.5280999s)

-- stdout --
	PING 172.29.192.1 (172.29.192.1): 56 data bytes
	
	--- 172.29.192.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.29.192.1) from pod (busybox-58667487b6-68jl6): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-011400 -- exec busybox-58667487b6-fzbr5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-011400 -- exec busybox-58667487b6-fzbr5 -- sh -c "ping -c 1 172.29.192.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-011400 -- exec busybox-58667487b6-fzbr5 -- sh -c "ping -c 1 172.29.192.1": exit status 1 (10.478924s)

-- stdout --
	PING 172.29.192.1 (172.29.192.1): 56 data bytes
	
	--- 172.29.192.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.29.192.1) from pod (busybox-58667487b6-fzbr5): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-011400 -- exec busybox-58667487b6-qwccg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-011400 -- exec busybox-58667487b6-qwccg -- sh -c "ping -c 1 172.29.192.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-011400 -- exec busybox-58667487b6-qwccg -- sh -c "ping -c 1 172.29.192.1": exit status 1 (10.5059037s)

-- stdout --
	PING 172.29.192.1 (172.29.192.1): 56 data bytes
	
	--- 172.29.192.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.29.192.1) from pod (busybox-58667487b6-qwccg): exit status 1
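Note: all three busybox pods report 100% packet loss pinging 172.29.192.1, the address the test resolved for host.minikube.internal (the Windows host side of the Hyper-V switch in this run). Below is a hedged manual reproduction using the pod name and address from the log above, plus a host-side firewall check; on Hyper-V a host firewall dropping ICMP echo from the guest subnet is a common cause, though nothing in this log confirms it:
	# Re-run the failing ping from one of the test pods.
	out/minikube-windows-amd64.exe kubectl -p ha-011400 -- exec busybox-58667487b6-68jl6 -- ping -c 1 172.29.192.1
	# On the Windows host, check which firewall profiles are enabled.
	Get-NetFirewallProfile | Select-Object Name, Enabled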
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-011400 -n ha-011400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-011400 -n ha-011400: (12.0835295s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 logs -n 25: (8.9049132s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image   | functional-253500                    | functional-253500 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:05 UTC | 27 Jan 25 11:06 UTC |
	|         | image ls --format table              |                   |                   |         |                     |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	| image   | functional-253500 image build -t     | functional-253500 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:05 UTC | 27 Jan 25 11:06 UTC |
	|         | localhost/my-image:functional-253500 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-253500 image ls           | functional-253500 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:06 UTC | 27 Jan 25 11:06 UTC |
	| delete  | -p functional-253500                 | functional-253500 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:08 UTC | 27 Jan 25 11:09 UTC |
	| start   | -p ha-011400 --wait=true             | ha-011400         | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:09 UTC | 27 Jan 25 11:20 UTC |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-011400 -- apply -f             | ha-011400         | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:20 UTC | 27 Jan 25 11:20 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-011400 -- rollout status       | ha-011400         | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:20 UTC | 27 Jan 25 11:20 UTC |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-011400 -- get pods -o          | ha-011400         | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:20 UTC | 27 Jan 25 11:20 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-011400 -- get pods -o          | ha-011400         | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:20 UTC | 27 Jan 25 11:20 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-011400 -- exec                 | ha-011400         | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:20 UTC | 27 Jan 25 11:20 UTC |
	|         | busybox-58667487b6-68jl6 --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-011400 -- exec                 | ha-011400         | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:20 UTC | 27 Jan 25 11:20 UTC |
	|         | busybox-58667487b6-fzbr5 --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-011400 -- exec                 | ha-011400         | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:20 UTC | 27 Jan 25 11:20 UTC |
	|         | busybox-58667487b6-qwccg --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-011400 -- exec                 | ha-011400         | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:20 UTC | 27 Jan 25 11:20 UTC |
	|         | busybox-58667487b6-68jl6 --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-011400 -- exec                 | ha-011400         | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:20 UTC | 27 Jan 25 11:20 UTC |
	|         | busybox-58667487b6-fzbr5 --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-011400 -- exec                 | ha-011400         | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:20 UTC | 27 Jan 25 11:20 UTC |
	|         | busybox-58667487b6-qwccg --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-011400 -- exec                 | ha-011400         | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:20 UTC | 27 Jan 25 11:20 UTC |
	|         | busybox-58667487b6-68jl6 -- nslookup |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-011400 -- exec                 | ha-011400         | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:20 UTC | 27 Jan 25 11:20 UTC |
	|         | busybox-58667487b6-fzbr5 -- nslookup |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-011400 -- exec                 | ha-011400         | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:20 UTC | 27 Jan 25 11:20 UTC |
	|         | busybox-58667487b6-qwccg -- nslookup |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-011400 -- get pods -o          | ha-011400         | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:20 UTC | 27 Jan 25 11:20 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-011400 -- exec                 | ha-011400         | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:20 UTC | 27 Jan 25 11:20 UTC |
	|         | busybox-58667487b6-68jl6             |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-011400 -- exec                 | ha-011400         | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:20 UTC |                     |
	|         | busybox-58667487b6-68jl6 -- sh       |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.29.192.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-011400 -- exec                 | ha-011400         | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:21 UTC | 27 Jan 25 11:21 UTC |
	|         | busybox-58667487b6-fzbr5             |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-011400 -- exec                 | ha-011400         | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:21 UTC |                     |
	|         | busybox-58667487b6-fzbr5 -- sh       |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.29.192.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-011400 -- exec                 | ha-011400         | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:21 UTC | 27 Jan 25 11:21 UTC |
	|         | busybox-58667487b6-qwccg             |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-011400 -- exec                 | ha-011400         | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:21 UTC |                     |
	|         | busybox-58667487b6-qwccg -- sh       |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.29.192.1            |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 11:09:07
	Running on machine: minikube6
	Binary: Built with gc go1.23.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 11:09:07.222562    5908 out.go:345] Setting OutFile to fd 1164 ...
	I0127 11:09:07.297679    5908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:09:07.297679    5908 out.go:358] Setting ErrFile to fd 1620...
	I0127 11:09:07.297679    5908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:09:07.319245    5908 out.go:352] Setting JSON to false
	I0127 11:09:07.322311    5908 start.go:129] hostinfo: {"hostname":"minikube6","uptime":439130,"bootTime":1737537016,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5371 Build 19045.5371","kernelVersion":"10.0.19045.5371 Build 19045.5371","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0127 11:09:07.322376    5908 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0127 11:09:07.327670    5908 out.go:177] * [ha-011400] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	I0127 11:09:07.331440    5908 notify.go:220] Checking for updates...
	I0127 11:09:07.333218    5908 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 11:09:07.335730    5908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:09:07.339346    5908 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0127 11:09:07.341979    5908 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 11:09:07.344594    5908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:09:07.347542    5908 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:09:12.521964    5908 out.go:177] * Using the hyperv driver based on user configuration
	I0127 11:09:12.526114    5908 start.go:297] selected driver: hyperv
	I0127 11:09:12.526114    5908 start.go:901] validating driver "hyperv" against <nil>
	I0127 11:09:12.526114    5908 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:09:12.572810    5908 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 11:09:12.573584    5908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:09:12.573584    5908 cni.go:84] Creating CNI manager for ""
	I0127 11:09:12.574403    5908 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0127 11:09:12.574403    5908 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0127 11:09:12.574403    5908 start.go:340] cluster config:
	{Name:ha-011400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-011400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin
:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I0127 11:09:12.575430    5908 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:09:12.580573    5908 out.go:177] * Starting "ha-011400" primary control-plane node in "ha-011400" cluster
	I0127 11:09:12.586108    5908 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0127 11:09:12.586108    5908 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0127 11:09:12.586108    5908 cache.go:56] Caching tarball of preloaded images
	I0127 11:09:12.587433    5908 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 11:09:12.587599    5908 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0127 11:09:12.587599    5908 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\config.json ...
	I0127 11:09:12.588396    5908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\config.json: {Name:mk918c8acba483aadee8de079cb12efb4b886e8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:09:12.589617    5908 start.go:360] acquireMachinesLock for ha-011400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 11:09:12.589617    5908 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-011400"
	I0127 11:09:12.590290    5908 start.go:93] Provisioning new machine with config: &{Name:ha-011400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-011400 Namespace:def
ault APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 11:09:12.590327    5908 start.go:125] createHost starting for "" (driver="hyperv")
	I0127 11:09:12.595937    5908 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0127 11:09:12.595937    5908 start.go:159] libmachine.API.Create for "ha-011400" (driver="hyperv")
	I0127 11:09:12.597037    5908 client.go:168] LocalClient.Create starting
	I0127 11:09:12.597298    5908 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0127 11:09:12.597298    5908 main.go:141] libmachine: Decoding PEM data...
	I0127 11:09:12.597817    5908 main.go:141] libmachine: Parsing certificate...
	I0127 11:09:12.597981    5908 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0127 11:09:12.598188    5908 main.go:141] libmachine: Decoding PEM data...
	I0127 11:09:12.598188    5908 main.go:141] libmachine: Parsing certificate...
	I0127 11:09:12.598363    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0127 11:09:14.533010    5908 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0127 11:09:14.533237    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:14.533237    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0127 11:09:16.159131    5908 main.go:141] libmachine: [stdout =====>] : False
	
	I0127 11:09:16.159537    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:16.159658    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0127 11:09:17.626696    5908 main.go:141] libmachine: [stdout =====>] : True
	
	I0127 11:09:17.626696    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:17.626913    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0127 11:09:21.086973    5908 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0127 11:09:21.087597    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:21.090325    5908 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 11:09:21.600112    5908 main.go:141] libmachine: Creating SSH key...
	I0127 11:09:21.807745    5908 main.go:141] libmachine: Creating VM...
	I0127 11:09:21.808091    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0127 11:09:24.541527    5908 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0127 11:09:24.541587    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:24.541816    5908 main.go:141] libmachine: Using switch "Default Switch"
	I0127 11:09:24.541925    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0127 11:09:26.224571    5908 main.go:141] libmachine: [stdout =====>] : True
	
	I0127 11:09:26.225134    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:26.225134    5908 main.go:141] libmachine: Creating VHD
	I0127 11:09:26.225462    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\fixed.vhd' -SizeBytes 10MB -Fixed
	I0127 11:09:29.898213    5908 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : BCF11593-87A7-490B-BD2B-18E7A6434F9B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0127 11:09:29.898213    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:29.898815    5908 main.go:141] libmachine: Writing magic tar header
	I0127 11:09:29.898815    5908 main.go:141] libmachine: Writing SSH key tar header
	I0127 11:09:29.912214    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\disk.vhd' -VHDType Dynamic -DeleteSource
	I0127 11:09:33.013760    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:09:33.014714    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:33.014714    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\disk.vhd' -SizeBytes 20000MB
	I0127 11:09:35.461459    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:09:35.461459    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:35.462473    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-011400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0127 11:09:38.867256    5908 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-011400 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0127 11:09:38.867440    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:38.867474    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-011400 -DynamicMemoryEnabled $false
	I0127 11:09:41.003653    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:09:41.003653    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:41.003653    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-011400 -Count 2
	I0127 11:09:43.052228    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:09:43.052228    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:43.052228    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-011400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\boot2docker.iso'
	I0127 11:09:45.485209    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:09:45.485209    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:45.485209    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-011400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\disk.vhd'
	I0127 11:09:47.961077    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:09:47.961572    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:47.961572    5908 main.go:141] libmachine: Starting VM...
	I0127 11:09:47.961572    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-011400
	I0127 11:09:50.829478    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:09:50.829725    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:50.829725    5908 main.go:141] libmachine: Waiting for host to start...
	I0127 11:09:50.829725    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:09:52.949375    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:09:52.950006    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:52.950006    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:09:55.332754    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:09:55.333360    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:56.333756    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:09:58.463211    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:09:58.463211    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:58.463211    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:10:00.985686    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:10:00.985686    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:01.986228    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:10:04.124938    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:10:04.124938    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:04.124938    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:10:06.500200    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:10:06.500200    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:07.500809    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:10:09.595674    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:10:09.595898    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:09.595969    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:10:11.986358    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:10:11.986409    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:12.987239    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:10:15.105667    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:10:15.105667    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:15.106630    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:10:17.598066    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:10:17.598066    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:17.598825    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:10:19.605367    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:10:19.606369    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:19.606399    5908 machine.go:93] provisionDockerMachine start ...
	I0127 11:10:19.606529    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:10:21.613169    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:10:21.613169    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:21.613429    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:10:24.000865    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:10:24.000865    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:24.006563    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:10:24.020134    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.192.249 22 <nil> <nil>}
	I0127 11:10:24.020134    5908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 11:10:24.157564    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 11:10:24.157564    5908 buildroot.go:166] provisioning hostname "ha-011400"
	I0127 11:10:24.157711    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:10:26.143087    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:10:26.143723    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:26.143723    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:10:28.551460    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:10:28.551537    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:28.556127    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:10:28.556864    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.192.249 22 <nil> <nil>}
	I0127 11:10:28.556864    5908 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-011400 && echo "ha-011400" | sudo tee /etc/hostname
	I0127 11:10:28.713391    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-011400
	
	I0127 11:10:28.713391    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:10:30.757638    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:10:30.758661    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:30.758661    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:10:33.184878    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:10:33.185376    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:33.190808    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:10:33.191539    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.192.249 22 <nil> <nil>}
	I0127 11:10:33.191539    5908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-011400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-011400/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-011400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:10:33.350859    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 11:10:33.350966    5908 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0127 11:10:33.351080    5908 buildroot.go:174] setting up certificates
	I0127 11:10:33.351109    5908 provision.go:84] configureAuth start
	I0127 11:10:33.351109    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:10:35.346492    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:10:35.346718    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:35.346825    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:10:37.776994    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:10:37.776994    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:37.777199    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:10:39.814583    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:10:39.814583    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:39.814583    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:10:42.188395    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:10:42.188442    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:42.188442    5908 provision.go:143] copyHostCerts
	I0127 11:10:42.188442    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0127 11:10:42.188962    5908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0127 11:10:42.188962    5908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0127 11:10:42.189335    5908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0127 11:10:42.190630    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0127 11:10:42.190875    5908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0127 11:10:42.190953    5908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0127 11:10:42.191410    5908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0127 11:10:42.192665    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0127 11:10:42.192842    5908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0127 11:10:42.192842    5908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0127 11:10:42.193046    5908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0127 11:10:42.194380    5908 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-011400 san=[127.0.0.1 172.29.192.249 ha-011400 localhost minikube]
	I0127 11:10:42.317687    5908 provision.go:177] copyRemoteCerts
	I0127 11:10:42.326827    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:10:42.326827    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:10:44.406944    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:10:44.406944    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:44.406944    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:10:46.794685    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:10:46.794685    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:46.795761    5908 sshutil.go:53] new ssh client: &{IP:172.29.192.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\id_rsa Username:docker}
	I0127 11:10:46.900156    5908 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5732808s)
	I0127 11:10:46.900156    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0127 11:10:46.900156    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 11:10:46.939905    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0127 11:10:46.940388    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0127 11:10:46.980857    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0127 11:10:46.981981    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 11:10:47.030423    5908 provision.go:87] duration metric: took 13.6791718s to configureAuth
	I0127 11:10:47.030423    5908 buildroot.go:189] setting minikube options for container-runtime
	I0127 11:10:47.031620    5908 config.go:182] Loaded profile config "ha-011400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 11:10:47.031620    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:10:49.098432    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:10:49.098432    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:49.099137    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:10:51.537040    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:10:51.537870    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:51.543074    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:10:51.543734    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.192.249 22 <nil> <nil>}
	I0127 11:10:51.543734    5908 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0127 11:10:51.684554    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0127 11:10:51.684658    5908 buildroot.go:70] root file system type: tmpfs
	I0127 11:10:51.684885    5908 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0127 11:10:51.684973    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:10:53.683836    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:10:53.683836    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:53.683929    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:10:56.063779    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:10:56.064517    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:56.070083    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:10:56.070871    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.192.249 22 <nil> <nil>}
	I0127 11:10:56.070871    5908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0127 11:10:56.238896    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0127 11:10:56.238964    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:10:58.225110    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:10:58.225305    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:58.225602    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:11:00.693976    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:11:00.694043    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:00.697892    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:11:00.699277    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.192.249 22 <nil> <nil>}
	I0127 11:11:00.699277    5908 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0127 11:11:02.953410    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
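The exchange above is minikube's install-if-changed handling of the generated unit: the candidate file is written to docker.service.new, and only when it differs from the live unit is it moved into place and the daemon reloaded, enabled, and restarted. A rough shell sketch of that pattern (UNIT_CONTENT stands in for the unit text shown above):

    # Write the candidate unit, then swap it in only if it differs from the live one.
    printf '%s' "$UNIT_CONTENT" | sudo tee /lib/systemd/system/docker.service.new >/dev/null
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    }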
	I0127 11:11:02.953410    5908 machine.go:96] duration metric: took 43.3465606s to provisionDockerMachine
	I0127 11:11:02.953410    5908 client.go:171] duration metric: took 1m50.3552256s to LocalClient.Create
	I0127 11:11:02.953410    5908 start.go:167] duration metric: took 1m50.356326s to libmachine.API.Create "ha-011400"
	I0127 11:11:02.953410    5908 start.go:293] postStartSetup for "ha-011400" (driver="hyperv")
	I0127 11:11:02.953410    5908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:11:02.965727    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:11:02.965727    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:11:05.169331    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:11:05.169331    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:05.169331    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:11:07.562887    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:11:07.563268    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:07.563817    5908 sshutil.go:53] new ssh client: &{IP:172.29.192.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\id_rsa Username:docker}
	I0127 11:11:07.668017    5908 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7022405s)
	I0127 11:11:07.684358    5908 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:11:07.689985    5908 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 11:11:07.689985    5908 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0127 11:11:07.689985    5908 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0127 11:11:07.691847    5908 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> 59562.pem in /etc/ssl/certs
	I0127 11:11:07.691957    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> /etc/ssl/certs/59562.pem
	I0127 11:11:07.702578    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:11:07.719676    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem --> /etc/ssl/certs/59562.pem (1708 bytes)
	I0127 11:11:07.761240    5908 start.go:296] duration metric: took 4.8077797s for postStartSetup
	I0127 11:11:07.764102    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:11:09.759814    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:11:09.759860    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:09.759928    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:11:12.132766    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:11:12.133212    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:12.133438    5908 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\config.json ...
	I0127 11:11:12.137229    5908 start.go:128] duration metric: took 1m59.5455659s to createHost
	I0127 11:11:12.137355    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:11:14.139463    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:11:14.140224    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:14.140224    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:11:16.561413    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:11:16.561648    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:16.566654    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:11:16.567175    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.192.249 22 <nil> <nil>}
	I0127 11:11:16.567434    5908 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:11:16.695667    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737976276.707150979
	
	I0127 11:11:16.695667    5908 fix.go:216] guest clock: 1737976276.707150979
	I0127 11:11:16.695667    5908 fix.go:229] Guest: 2025-01-27 11:11:16.707150979 +0000 UTC Remote: 2025-01-27 11:11:12.1372298 +0000 UTC m=+124.999711201 (delta=4.569921179s)
	I0127 11:11:16.695879    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:11:18.772791    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:11:18.773401    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:18.773401    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:11:21.223695    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:11:21.224339    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:21.229544    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:11:21.230235    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.192.249 22 <nil> <nil>}
	I0127 11:11:21.230235    5908 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1737976276
	I0127 11:11:21.382762    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 27 11:11:16 UTC 2025
	
	I0127 11:11:21.382848    5908 fix.go:236] clock set: Mon Jan 27 11:11:16 UTC 2025
	 (err=<nil>)
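The clock fix just above reads the guest clock with date +%s.%N, compares it to the host clock, and when the drift is noticeable rewrites the guest clock with date -s. A rough sketch of that check, reusing the guest address and user from this log (the 2-second threshold is an assumption, not a value taken from the log):

    # Compare guest and host clocks over SSH and reset the guest if they have drifted apart.
    guest=$(ssh docker@172.29.192.249 'date +%s.%N')
    host=$(date +%s.%N)
    drift=$(echo "$host - $guest" | bc | tr -d '-')    # absolute drift in seconds
    if [ "$(echo "$drift > 2" | bc)" -eq 1 ]; then     # threshold assumed for illustration
      ssh docker@172.29.192.249 "sudo date -s @${host%.*}"
    fi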
	I0127 11:11:21.382848    5908 start.go:83] releasing machines lock for "ha-011400", held for 2m8.7913742s
	I0127 11:11:21.383022    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:11:23.389730    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:11:23.390549    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:23.390636    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:11:25.787606    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:11:25.787655    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:25.791275    5908 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0127 11:11:25.791275    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:11:25.804128    5908 ssh_runner.go:195] Run: cat /version.json
	I0127 11:11:25.804793    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:11:27.990178    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:11:27.990975    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:27.991051    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:11:28.027708    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:11:28.027708    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:28.027875    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:11:30.588051    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:11:30.588107    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:30.588107    5908 sshutil.go:53] new ssh client: &{IP:172.29.192.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\id_rsa Username:docker}
	I0127 11:11:30.610750    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:11:30.610750    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:30.611355    5908 sshutil.go:53] new ssh client: &{IP:172.29.192.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\id_rsa Username:docker}
	I0127 11:11:30.690624    5908 ssh_runner.go:235] Completed: cat /version.json: (4.886446s)
	I0127 11:11:30.702245    5908 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9109194s)
	W0127 11:11:30.702245    5908 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0127 11:11:30.702938    5908 ssh_runner.go:195] Run: systemctl --version
	I0127 11:11:30.721424    5908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 11:11:30.729429    5908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:11:30.739793    5908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:11:30.766360    5908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 11:11:30.766360    5908 start.go:495] detecting cgroup driver to use...
	I0127 11:11:30.766742    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:11:30.811395    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 11:11:30.842288    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0127 11:11:30.848317    5908 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0127 11:11:30.848454    5908 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
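The two warnings above follow from the probe started at 11:11:25 and completed at 11:11:30: the check runs the Windows binary name curl.exe inside the Linux guest over SSH, which exits with status 127 ("curl.exe: command not found"). A probe using the guest's own curl binary would look like this (illustrative, not a command taken from the log):

    # Connectivity probe from inside the guest using the Linux curl binary.
    ssh docker@172.29.192.249 'curl -sS -m 2 https://registry.k8s.io/'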
	I0127 11:11:30.864266    5908 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 11:11:30.876184    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 11:11:30.905910    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 11:11:30.933956    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 11:11:30.960270    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 11:11:30.988350    5908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:11:31.019360    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 11:11:31.053690    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 11:11:31.088425    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 11:11:31.116823    5908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:11:31.133902    5908 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 11:11:31.145998    5908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 11:11:31.181685    5908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
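The three commands above prepare kernel networking for the CNI: the sysctl probe fails while br_netfilter is not loaded, so the module is loaded and IPv4 forwarding is switched on. Condensed into a sketch:

    # Make bridged traffic visible to iptables and enable IPv4 forwarding.
    sudo sysctl net.bridge.bridge-nf-call-iptables \
      || sudo modprobe br_netfilter                  # key only exists once the module is loaded
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"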
	I0127 11:11:31.213595    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:11:31.426063    5908 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 11:11:31.459596    5908 start.go:495] detecting cgroup driver to use...
	I0127 11:11:31.470026    5908 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0127 11:11:31.501795    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:11:31.534125    5908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 11:11:31.576708    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:11:31.608353    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 11:11:31.640956    5908 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 11:11:31.707701    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 11:11:31.728970    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:11:31.771490    5908 ssh_runner.go:195] Run: which cri-dockerd
	I0127 11:11:31.792650    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0127 11:11:31.810007    5908 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0127 11:11:31.852912    5908 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0127 11:11:32.051359    5908 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0127 11:11:32.234576    5908 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0127 11:11:32.234897    5908 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
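The 130-byte daemon.json written here is not reproduced in the log; a minimal file that selects the cgroupfs cgroup driver (assumed content, shown only to illustrate what "configuring docker to use cgroupfs" typically means) would be:

    # Assumed minimal daemon.json; the actual 130-byte file content is not shown in this log.
    sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
    { "exec-opts": ["native.cgroupdriver=cgroupfs"] }
    EOF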
	I0127 11:11:32.279287    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:11:32.482928    5908 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 11:11:35.078528    5908 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5954283s)
	I0127 11:11:35.089502    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0127 11:11:35.125404    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 11:11:35.159439    5908 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0127 11:11:35.364144    5908 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0127 11:11:35.564596    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:11:35.748802    5908 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0127 11:11:35.786793    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 11:11:35.816974    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:11:36.006247    5908 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0127 11:11:36.099786    5908 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0127 11:11:36.111632    5908 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0127 11:11:36.119620    5908 start.go:563] Will wait 60s for crictl version
	I0127 11:11:36.129364    5908 ssh_runner.go:195] Run: which crictl
	I0127 11:11:36.145183    5908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:11:36.196286    5908 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0127 11:11:36.205610    5908 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 11:11:36.248769    5908 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 11:11:36.299988    5908 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0127 11:11:36.299988    5908 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0127 11:11:36.303987    5908 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0127 11:11:36.303987    5908 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0127 11:11:36.303987    5908 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0127 11:11:36.303987    5908 ip.go:211] Found interface: {Index:17 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:43:05:a6 Flags:up|broadcast|multicast|running}
	I0127 11:11:36.307032    5908 ip.go:214] interface addr: fe80::8ceb:a58b:811a:7c79/64
	I0127 11:11:36.307032    5908 ip.go:214] interface addr: 172.29.192.1/20
	I0127 11:11:36.316048    5908 ssh_runner.go:195] Run: grep 172.29.192.1	host.minikube.internal$ /etc/hosts
	I0127 11:11:36.323051    5908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
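The /etc/hosts update above is a drop-then-append rewrite, so repeated runs never duplicate the host.minikube.internal entry. The same one-liner, unpacked for readability:

    # Strip any stale host.minikube.internal line, append the current mapping,
    # and copy the result back over /etc/hosts.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'172.29.192.1\thost.minikube.internal'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts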
	I0127 11:11:36.354737    5908 kubeadm.go:883] updating cluster {Name:ha-011400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-011400 Namespace:default APIServerHAVIP:172.29.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.192.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 11:11:36.355727    5908 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0127 11:11:36.362824    5908 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 11:11:36.386228    5908 docker.go:689] Got preloaded images: 
	I0127 11:11:36.386228    5908 docker.go:695] registry.k8s.io/kube-apiserver:v1.32.1 wasn't preloaded
	I0127 11:11:36.396856    5908 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0127 11:11:36.424471    5908 ssh_runner.go:195] Run: which lz4
	I0127 11:11:36.429988    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0127 11:11:36.440927    5908 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 11:11:36.446457    5908 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 11:11:36.446736    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (349810983 bytes)
	I0127 11:11:38.441566    5908 docker.go:653] duration metric: took 2.0112085s to copy over tarball
	I0127 11:11:38.454490    5908 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 11:11:46.790497    5908 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.3359203s)
	I0127 11:11:46.790497    5908 ssh_runner.go:146] rm: /preloaded.tar.lz4
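The preload step above copies the lz4-compressed image tarball into the guest and unpacks it straight into /var, so the Docker image store is populated before the daemon restart that follows. The core of it, as run on the guest:

    # Unpack the preloaded images into /var (preserving security xattrs), then drop the tarball.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    rm /preloaded.tar.lz4    # exact removal invocation is not shown in the log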
	I0127 11:11:46.848768    5908 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0127 11:11:46.867274    5908 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0127 11:11:46.909862    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:11:47.100078    5908 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 11:11:50.401617    5908 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3013415s)
	I0127 11:11:50.410397    5908 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 11:11:50.435484    5908 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0127 11:11:50.435647    5908 cache_images.go:84] Images are preloaded, skipping loading
	I0127 11:11:50.435647    5908 kubeadm.go:934] updating node { 172.29.192.249 8443 v1.32.1 docker true true} ...
	I0127 11:11:50.435969    5908 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-011400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.192.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:ha-011400 Namespace:default APIServerHAVIP:172.29.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 11:11:50.444978    5908 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0127 11:11:50.505231    5908 cni.go:84] Creating CNI manager for ""
	I0127 11:11:50.505280    5908 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0127 11:11:50.505280    5908 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 11:11:50.505335    5908 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.29.192.249 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-011400 NodeName:ha-011400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.29.192.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.29.192.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 11:11:50.505375    5908 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.29.192.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-011400"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.29.192.249"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.29.192.249"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 11:11:50.505375    5908 kube-vip.go:115] generating kube-vip config ...
	I0127 11:11:50.516038    5908 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0127 11:11:50.542312    5908 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0127 11:11:50.542450    5908 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.29.207.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0127 11:11:50.552753    5908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 11:11:50.567451    5908 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 11:11:50.579484    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0127 11:11:50.596884    5908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0127 11:11:50.627481    5908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:11:50.656126    5908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0127 11:11:50.685796    5908 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0127 11:11:50.724528    5908 ssh_runner.go:195] Run: grep 172.29.207.254	control-plane.minikube.internal$ /etc/hosts
	I0127 11:11:50.730535    5908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:11:50.761623    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:11:50.939221    5908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:11:50.965161    5908 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400 for IP: 172.29.192.249
	I0127 11:11:50.965161    5908 certs.go:194] generating shared ca certs ...
	I0127 11:11:50.965308    5908 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:11:50.965963    5908 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0127 11:11:50.966485    5908 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0127 11:11:50.966731    5908 certs.go:256] generating profile certs ...
	I0127 11:11:50.967351    5908 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\client.key
	I0127 11:11:50.967351    5908 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\client.crt with IP's: []
	I0127 11:11:51.134209    5908 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\client.crt ...
	I0127 11:11:51.134209    5908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\client.crt: {Name:mkba84c6952d76a5735a9db83ce4c4badf7ffeb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:11:51.135583    5908 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\client.key ...
	I0127 11:11:51.135583    5908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\client.key: {Name:mke75589f2e06ab48fc67ae6f019dea0ee774b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:11:51.137017    5908 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key.f8025a70
	I0127 11:11:51.137017    5908 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt.f8025a70 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.29.192.249 172.29.207.254]
	I0127 11:11:51.201513    5908 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt.f8025a70 ...
	I0127 11:11:51.201513    5908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt.f8025a70: {Name:mkb4d8925a0047dcb0da4f5c22cc0bf9458620c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:11:51.202610    5908 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key.f8025a70 ...
	I0127 11:11:51.202610    5908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key.f8025a70: {Name:mk5499447aca49b42a042f12c2ffd4a4e3eee915 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:11:51.203623    5908 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt.f8025a70 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt
	I0127 11:11:51.218217    5908 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key.f8025a70 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key
	I0127 11:11:51.220322    5908 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.key
	I0127 11:11:51.220566    5908 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.crt with IP's: []
	I0127 11:11:51.412816    5908 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.crt ...
	I0127 11:11:51.412816    5908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.crt: {Name:mkf67e1f2becfa1a0326341caca64d6a4aa03284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:11:51.415060    5908 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.key ...
	I0127 11:11:51.415060    5908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.key: {Name:mk5bf10f49157fce23a6fa1649fd2e473d0f78e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:11:51.415930    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0127 11:11:51.416532    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0127 11:11:51.416728    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0127 11:11:51.416896    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0127 11:11:51.416896    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0127 11:11:51.416896    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0127 11:11:51.416896    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0127 11:11:51.429175    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0127 11:11:51.430375    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem (1338 bytes)
	W0127 11:11:51.431164    5908 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956_empty.pem, impossibly tiny 0 bytes
	I0127 11:11:51.431164    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0127 11:11:51.431498    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0127 11:11:51.431905    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0127 11:11:51.431905    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0127 11:11:51.432650    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem (1708 bytes)
	I0127 11:11:51.433001    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem -> /usr/share/ca-certificates/5956.pem
	I0127 11:11:51.433180    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> /usr/share/ca-certificates/59562.pem
	I0127 11:11:51.433343    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:11:51.433500    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:11:51.476702    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:11:51.519341    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:11:51.561983    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 11:11:51.610569    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 11:11:51.657176    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 11:11:51.706019    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:11:51.747867    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 11:11:51.789263    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem --> /usr/share/ca-certificates/5956.pem (1338 bytes)
	I0127 11:11:51.829637    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem --> /usr/share/ca-certificates/59562.pem (1708 bytes)
	I0127 11:11:51.870789    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:11:51.913415    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:11:51.959158    5908 ssh_runner.go:195] Run: openssl version
	I0127 11:11:51.978863    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:11:52.007768    5908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:11:52.015516    5908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:11:52.025877    5908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:11:52.045885    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:11:52.073834    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5956.pem && ln -fs /usr/share/ca-certificates/5956.pem /etc/ssl/certs/5956.pem"
	I0127 11:11:52.101764    5908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5956.pem
	I0127 11:11:52.108405    5908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:52 /usr/share/ca-certificates/5956.pem
	I0127 11:11:52.118376    5908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5956.pem
	I0127 11:11:52.136914    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5956.pem /etc/ssl/certs/51391683.0"
	I0127 11:11:52.166813    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59562.pem && ln -fs /usr/share/ca-certificates/59562.pem /etc/ssl/certs/59562.pem"
	I0127 11:11:52.195868    5908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59562.pem
	I0127 11:11:52.202178    5908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:52 /usr/share/ca-certificates/59562.pem
	I0127 11:11:52.212975    5908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59562.pem
	I0127 11:11:52.231370    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59562.pem /etc/ssl/certs/3ec20f2e.0"
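The openssl/ln sequence above is the subject-hash trick that makes extra CAs visible to OpenSSL without rebuilding the system bundle: each certificate is linked into /etc/ssl/certs under its <subject-hash>.0 name. For one file (paths and the b5213941 hash taken from this log):

    # Link a CA certificate under its OpenSSL subject-hash name so TLS clients can find it.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")    # prints b5213941 for this CA
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"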
	I0127 11:11:52.260852    5908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:11:52.268071    5908 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 11:11:52.268405    5908 kubeadm.go:392] StartCluster: {Name:ha-011400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-011400 Namespace:default APIServerHAVIP:172.29.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.192.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:11:52.276725    5908 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0127 11:11:52.310633    5908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 11:11:52.338469    5908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:11:52.363347    5908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:11:52.379311    5908 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:11:52.379311    5908 kubeadm.go:157] found existing configuration files:
	
	I0127 11:11:52.389376    5908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:11:52.411728    5908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:11:52.426114    5908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:11:52.457965    5908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:11:52.478921    5908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:11:52.490610    5908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:11:52.520032    5908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:11:52.541817    5908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:11:52.553672    5908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:11:52.583006    5908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:11:52.601375    5908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:11:52.614820    5908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:11:52.638562    5908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:11:52.879181    5908 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 11:11:52.879392    5908 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:11:53.031198    5908 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:11:53.031539    5908 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:11:53.031539    5908 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 11:11:53.053180    5908 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:11:53.056434    5908 out.go:235]   - Generating certificates and keys ...
	I0127 11:11:53.056695    5908 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:11:53.056695    5908 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:11:53.272424    5908 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 11:11:53.670890    5908 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 11:11:53.819417    5908 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 11:11:53.998510    5908 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 11:11:54.249756    5908 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 11:11:54.250161    5908 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-011400 localhost] and IPs [172.29.192.249 127.0.0.1 ::1]
	I0127 11:11:54.306093    5908 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 11:11:54.306468    5908 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-011400 localhost] and IPs [172.29.192.249 127.0.0.1 ::1]
	I0127 11:11:54.553398    5908 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 11:11:55.127728    5908 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 11:11:55.547148    5908 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 11:11:55.549135    5908 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:11:55.816743    5908 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:11:55.970494    5908 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 11:11:56.103571    5908 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:11:56.670631    5908 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:11:57.002691    5908 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:11:57.003985    5908 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:11:57.007311    5908 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:11:57.013123    5908 out.go:235]   - Booting up control plane ...
	I0127 11:11:57.013426    5908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:11:57.013606    5908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:11:57.013776    5908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:11:57.038786    5908 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:11:57.047306    5908 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:11:57.047439    5908 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:11:57.252031    5908 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 11:11:57.252466    5908 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 11:11:58.253645    5908 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002154367s
	I0127 11:11:58.253711    5908 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 11:12:07.188471    5908 kubeadm.go:310] [api-check] The API server is healthy after 8.93489925s
	I0127 11:12:07.209946    5908 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 11:12:07.239557    5908 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 11:12:07.280863    5908 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 11:12:07.280863    5908 kubeadm.go:310] [mark-control-plane] Marking the node ha-011400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 11:12:07.297437    5908 kubeadm.go:310] [bootstrap-token] Using token: 7oks3g.btlejrxbw13gzxd7
	I0127 11:12:07.300662    5908 out.go:235]   - Configuring RBAC rules ...
	I0127 11:12:07.301245    5908 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 11:12:07.311249    5908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 11:12:07.332069    5908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 11:12:07.342128    5908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 11:12:07.353406    5908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 11:12:07.366456    5908 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 11:12:07.600176    5908 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 11:12:08.076410    5908 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 11:12:08.602413    5908 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 11:12:08.603774    5908 kubeadm.go:310] 
	I0127 11:12:08.604639    5908 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 11:12:08.604728    5908 kubeadm.go:310] 
	I0127 11:12:08.605066    5908 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 11:12:08.605066    5908 kubeadm.go:310] 
	I0127 11:12:08.605160    5908 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 11:12:08.605378    5908 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 11:12:08.605530    5908 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 11:12:08.605591    5908 kubeadm.go:310] 
	I0127 11:12:08.605697    5908 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 11:12:08.605697    5908 kubeadm.go:310] 
	I0127 11:12:08.605697    5908 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 11:12:08.605697    5908 kubeadm.go:310] 
	I0127 11:12:08.605697    5908 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 11:12:08.606328    5908 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 11:12:08.606696    5908 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 11:12:08.606743    5908 kubeadm.go:310] 
	I0127 11:12:08.606998    5908 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 11:12:08.606998    5908 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 11:12:08.606998    5908 kubeadm.go:310] 
	I0127 11:12:08.606998    5908 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7oks3g.btlejrxbw13gzxd7 \
	I0127 11:12:08.607669    5908 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7b53ba02e26824ededdf08178373023b65bda8005ddad46edfe91cb6a3cb8d3f \
	I0127 11:12:08.607787    5908 kubeadm.go:310] 	--control-plane 
	I0127 11:12:08.607787    5908 kubeadm.go:310] 
	I0127 11:12:08.608014    5908 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 11:12:08.608014    5908 kubeadm.go:310] 
	I0127 11:12:08.608014    5908 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7oks3g.btlejrxbw13gzxd7 \
	I0127 11:12:08.608605    5908 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7b53ba02e26824ededdf08178373023b65bda8005ddad46edfe91cb6a3cb8d3f 
	I0127 11:12:08.610642    5908 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 11:12:08.610701    5908 cni.go:84] Creating CNI manager for ""
	I0127 11:12:08.610766    5908 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0127 11:12:08.615658    5908 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0127 11:12:08.628648    5908 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0127 11:12:08.636828    5908 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0127 11:12:08.636971    5908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0127 11:12:08.678902    5908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0127 11:12:09.351264    5908 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 11:12:09.363240    5908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-011400 minikube.k8s.io/updated_at=2025_01_27T11_12_09_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=ha-011400 minikube.k8s.io/primary=true
	I0127 11:12:09.364238    5908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:12:09.375371    5908 ops.go:34] apiserver oom_adj: -16
	I0127 11:12:09.602364    5908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:12:10.101790    5908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:12:10.604489    5908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:12:11.104480    5908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:12:11.229565    5908 kubeadm.go:1113] duration metric: took 1.8782808s to wait for elevateKubeSystemPrivileges
	I0127 11:12:11.229565    5908 kubeadm.go:394] duration metric: took 18.9609621s to StartCluster
	I0127 11:12:11.229565    5908 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:12:11.229565    5908 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 11:12:11.233122    5908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:12:11.235479    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 11:12:11.235479    5908 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.29.192.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 11:12:11.235479    5908 start.go:241] waiting for startup goroutines ...
	I0127 11:12:11.235479    5908 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 11:12:11.235790    5908 addons.go:69] Setting storage-provisioner=true in profile "ha-011400"
	I0127 11:12:11.235790    5908 addons.go:69] Setting default-storageclass=true in profile "ha-011400"
	I0127 11:12:11.235844    5908 addons.go:238] Setting addon storage-provisioner=true in "ha-011400"
	I0127 11:12:11.235844    5908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-011400"
	I0127 11:12:11.236153    5908 host.go:66] Checking if "ha-011400" exists ...
	I0127 11:12:11.236153    5908 config.go:182] Loaded profile config "ha-011400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 11:12:11.236903    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:12:11.237572    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:12:11.379803    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.29.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 11:12:11.784071    5908 start.go:971] {"host.minikube.internal": 172.29.192.1} host record injected into CoreDNS's ConfigMap
	I0127 11:12:13.523490    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:12:13.523490    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:13.523810    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:12:13.524293    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:13.524824    5908 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 11:12:13.525707    5908 kapi.go:59] client config for ha-011400: &rest.Config{Host:"https://172.29.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-011400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-011400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x301e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 11:12:13.527326    5908 cert_rotation.go:140] Starting client certificate rotation controller
	I0127 11:12:13.527682    5908 addons.go:238] Setting addon default-storageclass=true in "ha-011400"
	I0127 11:12:13.527829    5908 host.go:66] Checking if "ha-011400" exists ...
	I0127 11:12:13.528743    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:12:13.529451    5908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:12:13.532832    5908 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:12:13.532832    5908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 11:12:13.532832    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:12:15.837462    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:12:15.837522    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:15.837582    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:12:15.972999    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:12:15.973963    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:15.974023    5908 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 11:12:15.974113    5908 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 11:12:15.974208    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:12:18.269334    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:12:18.269334    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:18.269334    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:12:18.558604    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:12:18.558604    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:18.559608    5908 sshutil.go:53] new ssh client: &{IP:172.29.192.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\id_rsa Username:docker}
	I0127 11:12:18.723181    5908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:12:20.716378    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:12:20.716464    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:20.716588    5908 sshutil.go:53] new ssh client: &{IP:172.29.192.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\id_rsa Username:docker}
	I0127 11:12:20.845054    5908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 11:12:20.998905    5908 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0127 11:12:20.998979    5908 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0127 11:12:20.998979    5908 round_trippers.go:463] GET https://172.29.207.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0127 11:12:20.998979    5908 round_trippers.go:469] Request Headers:
	I0127 11:12:20.998979    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:12:20.998979    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:12:21.012855    5908 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0127 11:12:21.013714    5908 round_trippers.go:463] PUT https://172.29.207.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0127 11:12:21.013714    5908 round_trippers.go:469] Request Headers:
	I0127 11:12:21.013714    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:12:21.013714    5908 round_trippers.go:473]     Content-Type: application/json
	I0127 11:12:21.013714    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:12:21.018345    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:12:21.021152    5908 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 11:12:21.025196    5908 addons.go:514] duration metric: took 9.7896147s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0127 11:12:21.025196    5908 start.go:246] waiting for cluster config update ...
	I0127 11:12:21.025196    5908 start.go:255] writing updated cluster config ...
	I0127 11:12:21.029600    5908 out.go:201] 
	I0127 11:12:21.049748    5908 config.go:182] Loaded profile config "ha-011400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 11:12:21.049847    5908 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\config.json ...
	I0127 11:12:21.059069    5908 out.go:177] * Starting "ha-011400-m02" control-plane node in "ha-011400" cluster
	I0127 11:12:21.061168    5908 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0127 11:12:21.061751    5908 cache.go:56] Caching tarball of preloaded images
	I0127 11:12:21.061970    5908 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 11:12:21.062457    5908 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0127 11:12:21.062498    5908 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\config.json ...
	I0127 11:12:21.068754    5908 start.go:360] acquireMachinesLock for ha-011400-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 11:12:21.068754    5908 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-011400-m02"
	I0127 11:12:21.069404    5908 start.go:93] Provisioning new machine with config: &{Name:ha-011400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-011400 Namespace:default APIServerHAVIP:172.29.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.192.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 11:12:21.069404    5908 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0127 11:12:21.072587    5908 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0127 11:12:21.073487    5908 start.go:159] libmachine.API.Create for "ha-011400" (driver="hyperv")
	I0127 11:12:21.073487    5908 client.go:168] LocalClient.Create starting
	I0127 11:12:21.073897    5908 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0127 11:12:21.073897    5908 main.go:141] libmachine: Decoding PEM data...
	I0127 11:12:21.074374    5908 main.go:141] libmachine: Parsing certificate...
	I0127 11:12:21.074536    5908 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0127 11:12:21.074779    5908 main.go:141] libmachine: Decoding PEM data...
	I0127 11:12:21.074779    5908 main.go:141] libmachine: Parsing certificate...
	I0127 11:12:21.074779    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0127 11:12:22.883268    5908 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0127 11:12:22.883268    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:22.883533    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0127 11:12:24.543408    5908 main.go:141] libmachine: [stdout =====>] : False
	
	I0127 11:12:24.543632    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:24.543632    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0127 11:12:25.993825    5908 main.go:141] libmachine: [stdout =====>] : True
	
	I0127 11:12:25.993825    5908 main.go:141] libmachine: [stderr =====>] : 
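	[editor note] For reference, the three host-side preflight checks logged above reduce to the PowerShell sketch below: is the Hyper-V module installed, and is the current user either a member of the "Hyper-V Administrators" group (SID S-1-5-32-578) or a local Administrator. Variable names and error messages are illustrative only; the three queries themselves are the ones the driver runs.

	    # Sketch assembled from the driver's preflight checks above (names/messages illustrative).
	    $hyperv    = @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	    $principal = [Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()
	    $isHvAdmin = $principal.IsInRole([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578"))
	    $isAdmin   = $principal.IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	    if (-not $hyperv)                   { throw 'Hyper-V PowerShell module not available' }
	    if (-not ($isHvAdmin -or $isAdmin)) { throw 'current user cannot manage Hyper-V VMs' }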
	I0127 11:12:25.994193    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0127 11:12:29.578002    5908 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0127 11:12:29.578002    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:29.580769    5908 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 11:12:30.099725    5908 main.go:141] libmachine: Creating SSH key...
	I0127 11:12:30.247062    5908 main.go:141] libmachine: Creating VM...
	I0127 11:12:30.247062    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0127 11:12:33.149678    5908 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0127 11:12:33.149785    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:33.149846    5908 main.go:141] libmachine: Using switch "Default Switch"
	I0127 11:12:33.149915    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0127 11:12:34.872870    5908 main.go:141] libmachine: [stdout =====>] : True
	
	I0127 11:12:34.873031    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:34.873031    5908 main.go:141] libmachine: Creating VHD
	I0127 11:12:34.873130    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0127 11:12:38.547633    5908 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 53CB5B06-04D8-4770-9AFB-1386F250ED69
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0127 11:12:38.547633    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:38.548220    5908 main.go:141] libmachine: Writing magic tar header
	I0127 11:12:38.548220    5908 main.go:141] libmachine: Writing SSH key tar header
	I0127 11:12:38.561142    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0127 11:12:41.653001    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:12:41.653331    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:41.653387    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m02\disk.vhd' -SizeBytes 20000MB
	I0127 11:12:44.129887    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:12:44.130662    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:44.130662    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-011400-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0127 11:12:47.648667    5908 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-011400-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0127 11:12:47.649483    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:47.649483    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-011400-m02 -DynamicMemoryEnabled $false
	I0127 11:12:49.801045    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:12:49.801045    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:49.801128    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-011400-m02 -Count 2
	I0127 11:12:51.939264    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:12:51.939264    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:51.940263    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-011400-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m02\boot2docker.iso'
	I0127 11:12:54.399753    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:12:54.400441    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:54.400533    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-011400-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m02\disk.vhd'
	I0127 11:12:56.975703    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:12:56.976530    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:56.976530    5908 main.go:141] libmachine: Starting VM...
	I0127 11:12:56.976530    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-011400-m02
	I0127 11:12:59.988152    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:12:59.988495    5908 main.go:141] libmachine: [stderr =====>] : 
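	[editor note] Taken together, the machine-creation commands logged above amount to the sequence below. VM name, paths, sizes, and switch name are copied from the log; this is only a readable consolidation of what the hyperv driver just executed, not an extra step the test ran. The small fixed VHD is created first so the driver can write its SSH-key tarball into it before converting and growing the disk.

	    # Consolidation of the Hyper-V\* commands shown above for ha-011400-m02.
	    $machineDir = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m02'
	    Hyper-V\New-VHD     -Path "$machineDir\fixed.vhd" -SizeBytes 10MB -Fixed
	    # (the driver writes its SSH-key tar header into fixed.vhd at this point)
	    Hyper-V\Convert-VHD -Path "$machineDir\fixed.vhd" -DestinationPath "$machineDir\disk.vhd" -VHDType Dynamic -DeleteSource
	    Hyper-V\Resize-VHD  -Path "$machineDir\disk.vhd" -SizeBytes 20000MB
	    Hyper-V\New-VM ha-011400-m02 -Path $machineDir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	    Hyper-V\Set-VMMemory -VMName ha-011400-m02 -DynamicMemoryEnabled $false
	    Hyper-V\Set-VMProcessor ha-011400-m02 -Count 2
	    Hyper-V\Set-VMDvdDrive -VMName ha-011400-m02 -Path "$machineDir\boot2docker.iso"
	    Hyper-V\Add-VMHardDiskDrive -VMName ha-011400-m02 -Path "$machineDir\disk.vhd"
	    Hyper-V\Start-VM ha-011400-m02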
	I0127 11:12:59.988495    5908 main.go:141] libmachine: Waiting for host to start...
	I0127 11:12:59.988495    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:13:02.211357    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:13:02.211357    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:02.211357    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:13:04.696487    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:13:04.696566    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:05.697408    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:13:07.895002    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:13:07.895002    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:07.895002    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:13:10.370120    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:13:10.370120    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:11.371452    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:13:13.530152    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:13:13.530152    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:13.530152    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:13:16.007157    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:13:16.007157    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:17.007910    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:13:19.177236    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:13:19.177301    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:19.177370    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:13:21.661297    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:13:21.661297    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:22.663087    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:13:24.875965    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:13:24.875965    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:24.876103    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:13:27.481388    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:13:27.481578    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:27.481659    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:13:29.530717    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:13:29.531275    5908 main.go:141] libmachine: [stderr =====>] : 
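	[editor note] The alternating state/IP queries above are the driver's "Waiting for host to start..." loop: it keeps asking Hyper-V for the VM state and the first address on the first network adapter until the guest reports an IP (172.29.195.173 here). A rough equivalent is sketched below; the 1-second retry and 5-minute cap are assumptions for illustration, and only the two Get-VM queries come from the log.

	    # Sketch of the IP-wait loop; retry interval and timeout are illustrative.
	    $deadline = (Get-Date).AddMinutes(5)
	    do {
	        $state = ( Hyper-V\Get-VM ha-011400-m02 ).State
	        $ip    = (( Hyper-V\Get-VM ha-011400-m02 ).NetworkAdapters[0]).IPAddresses[0]
	        if ($state -eq 'Running' -and $ip) { break }
	        Start-Sleep -Seconds 1
	    } while ((Get-Date) -lt $deadline)
	    if (-not $ip) { throw 'timed out waiting for ha-011400-m02 to report an IP address' }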
	I0127 11:13:29.531275    5908 machine.go:93] provisionDockerMachine start ...
	I0127 11:13:29.531375    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:13:31.633931    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:13:31.633931    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:31.633931    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:13:34.176933    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:13:34.176983    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:34.182221    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:13:34.198242    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.195.173 22 <nil> <nil>}
	I0127 11:13:34.198339    5908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 11:13:34.328627    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 11:13:34.328735    5908 buildroot.go:166] provisioning hostname "ha-011400-m02"
	I0127 11:13:34.328735    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:13:36.376031    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:13:36.376031    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:36.376143    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:13:38.805787    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:13:38.805787    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:38.812607    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:13:38.813341    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.195.173 22 <nil> <nil>}
	I0127 11:13:38.813341    5908 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-011400-m02 && echo "ha-011400-m02" | sudo tee /etc/hostname
	I0127 11:13:38.968354    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-011400-m02
	
	I0127 11:13:38.968456    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:13:41.001977    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:13:41.002840    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:41.002840    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:13:43.452505    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:13:43.452505    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:43.457496    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:13:43.458191    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.195.173 22 <nil> <nil>}
	I0127 11:13:43.458191    5908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-011400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-011400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-011400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:13:43.596906    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 11:13:43.596906    5908 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0127 11:13:43.596906    5908 buildroot.go:174] setting up certificates
	I0127 11:13:43.596906    5908 provision.go:84] configureAuth start
	I0127 11:13:43.596906    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:13:45.703466    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:13:45.704485    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:45.704534    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:13:48.179855    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:13:48.180196    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:48.180297    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:13:50.301544    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:13:50.301544    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:50.301544    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:13:52.744076    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:13:52.744383    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:52.744445    5908 provision.go:143] copyHostCerts
	I0127 11:13:52.744445    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0127 11:13:52.744445    5908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0127 11:13:52.744445    5908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0127 11:13:52.745229    5908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0127 11:13:52.746610    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0127 11:13:52.746761    5908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0127 11:13:52.746761    5908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0127 11:13:52.747290    5908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0127 11:13:52.748035    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0127 11:13:52.748571    5908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0127 11:13:52.748657    5908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0127 11:13:52.748985    5908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0127 11:13:52.750012    5908 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-011400-m02 san=[127.0.0.1 172.29.195.173 ha-011400-m02 localhost minikube]
	I0127 11:13:53.033268    5908 provision.go:177] copyRemoteCerts
	I0127 11:13:53.044263    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:13:53.044263    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:13:55.090856    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:13:55.090856    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:55.091434    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:13:57.585152    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:13:57.585152    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:57.586557    5908 sshutil.go:53] new ssh client: &{IP:172.29.195.173 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m02\id_rsa Username:docker}
	I0127 11:13:57.688637    5908 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6443256s)
	I0127 11:13:57.688739    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0127 11:13:57.689389    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 11:13:57.735155    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0127 11:13:57.735155    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 11:13:57.779501    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0127 11:13:57.779501    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 11:13:57.826836    5908 provision.go:87] duration metric: took 14.2297823s to configureAuth
	I0127 11:13:57.826836    5908 buildroot.go:189] setting minikube options for container-runtime
	I0127 11:13:57.827429    5908 config.go:182] Loaded profile config "ha-011400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 11:13:57.827429    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:13:59.950211    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:13:59.950211    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:59.950382    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:14:02.461510    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:14:02.461510    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:02.466602    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:14:02.467146    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.195.173 22 <nil> <nil>}
	I0127 11:14:02.467146    5908 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0127 11:14:02.586009    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0127 11:14:02.586009    5908 buildroot.go:70] root file system type: tmpfs
	I0127 11:14:02.586546    5908 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0127 11:14:02.586683    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:14:04.671708    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:14:04.671989    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:04.671989    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:14:07.175168    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:14:07.175168    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:07.181011    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:14:07.181011    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.195.173 22 <nil> <nil>}
	I0127 11:14:07.181616    5908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.29.192.249"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0127 11:14:07.333037    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.29.192.249
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0127 11:14:07.333116    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:14:09.413564    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:14:09.414273    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:09.414434    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:14:11.908031    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:14:11.908031    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:11.913216    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:14:11.913906    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.195.173 22 <nil> <nil>}
	I0127 11:14:11.913906    5908 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0127 11:14:14.152693    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0127 11:14:14.152778    5908 machine.go:96] duration metric: took 44.6210397s to provisionDockerMachine
	I0127 11:14:14.152778    5908 client.go:171] duration metric: took 1m53.0781151s to LocalClient.Create
	I0127 11:14:14.152935    5908 start.go:167] duration metric: took 1m53.0782716s to libmachine.API.Create "ha-011400"
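
	The sequence above writes docker.service.new over SSH and only swaps it into place when it differs from the existing unit (the diff || { mv; daemon-reload; restart; } command). The Go sketch below mirrors that "replace only if changed" check against a local file; it is an illustration, not minikube's code, and the file name and the restart decision are assumptions.

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// replaceIfChanged writes newContent to dst only when it differs from the
	// current contents and reports whether a daemon-reload/restart would follow.
	func replaceIfChanged(dst string, newContent []byte) (bool, error) {
		cur, err := os.ReadFile(dst)
		if err == nil && bytes.Equal(cur, newContent) {
			return false, nil // identical: leave the unit alone
		}
		if err != nil && !os.IsNotExist(err) {
			return false, err
		}
		if err := os.WriteFile(dst, newContent, 0o644); err != nil {
			return false, err
		}
		return true, nil
	}

	func main() {
		changed, err := replaceIfChanged("docker.service.demo", []byte("[Unit]\nDescription=demo\n"))
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("restart needed:", changed)
	}
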
	I0127 11:14:14.152935    5908 start.go:293] postStartSetup for "ha-011400-m02" (driver="hyperv")
	I0127 11:14:14.152935    5908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:14:14.163380    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:14:14.163380    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:14:16.308789    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:14:16.308886    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:16.308979    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:14:18.790253    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:14:18.791258    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:18.791258    5908 sshutil.go:53] new ssh client: &{IP:172.29.195.173 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m02\id_rsa Username:docker}
	I0127 11:14:18.896469    5908 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.733039s)
	I0127 11:14:18.907688    5908 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:14:18.914326    5908 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 11:14:18.914326    5908 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0127 11:14:18.914326    5908 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0127 11:14:18.915588    5908 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> 59562.pem in /etc/ssl/certs
	I0127 11:14:18.915588    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> /etc/ssl/certs/59562.pem
	I0127 11:14:18.925643    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:14:18.942978    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem --> /etc/ssl/certs/59562.pem (1708 bytes)
	I0127 11:14:18.984503    5908 start.go:296] duration metric: took 4.8315174s for postStartSetup
	I0127 11:14:18.987055    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:14:21.127908    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:14:21.127908    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:21.127908    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:14:23.541931    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:14:23.543021    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:23.543021    5908 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\config.json ...
	I0127 11:14:23.545720    5908 start.go:128] duration metric: took 2m2.4750415s to createHost
	I0127 11:14:23.545720    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:14:25.652946    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:14:25.653114    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:25.653221    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:14:28.138298    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:14:28.138298    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:28.144154    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:14:28.144777    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.195.173 22 <nil> <nil>}
	I0127 11:14:28.144777    5908 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:14:28.265300    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737976468.277162667
	
	I0127 11:14:28.265405    5908 fix.go:216] guest clock: 1737976468.277162667
	I0127 11:14:28.265405    5908 fix.go:229] Guest: 2025-01-27 11:14:28.277162667 +0000 UTC Remote: 2025-01-27 11:14:23.54572 +0000 UTC m=+316.406210801 (delta=4.731442667s)
	I0127 11:14:28.265405    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:14:30.354013    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:14:30.354286    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:30.354286    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:14:32.807954    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:14:32.808172    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:32.813425    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:14:32.814269    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.195.173 22 <nil> <nil>}
	I0127 11:14:32.814269    5908 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1737976468
	I0127 11:14:32.958704    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 27 11:14:28 UTC 2025
	
	I0127 11:14:32.958704    5908 fix.go:236] clock set: Mon Jan 27 11:14:28 UTC 2025
	 (err=<nil>)
	I0127 11:14:32.958704    5908 start.go:83] releasing machines lock for "ha-011400-m02", held for 2m11.8880646s
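
	The clock fix above reads the guest's `date +%s.%N`, compares it with the host-side timestamp (delta=4.73s here), and resets the guest clock with `sudo date -s @<seconds>`. A minimal sketch of that comparison, assuming the fractional part is the nine-digit nanosecond field that %N prints and using an illustrative 1-second threshold:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock parses the output of `date +%s.%N` (seconds.nanoseconds).
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			// %N always prints nine digits, so the fraction is already nanoseconds.
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1737976468.277162667") // value from the log above
		if err != nil {
			fmt.Println("parse error:", err)
			return
		}
		delta := guest.Sub(time.Now())
		fmt.Printf("guest=%s delta=%s\n", guest.UTC(), delta)
		if delta > time.Second || delta < -time.Second {
			// mirrors the `sudo date -s @<seconds>` command in the log
			fmt.Printf("would run: sudo date -s @%d\n", guest.Unix())
		}
	}
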
	I0127 11:14:32.958890    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:14:35.026686    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:14:35.026686    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:35.027003    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:14:37.469776    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:14:37.470427    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:37.473510    5908 out.go:177] * Found network options:
	I0127 11:14:37.476239    5908 out.go:177]   - NO_PROXY=172.29.192.249
	W0127 11:14:37.478437    5908 proxy.go:119] fail to check proxy env: Error ip not in block
	I0127 11:14:37.481021    5908 out.go:177]   - NO_PROXY=172.29.192.249
	W0127 11:14:37.484053    5908 proxy.go:119] fail to check proxy env: Error ip not in block
	W0127 11:14:37.484053    5908 proxy.go:119] fail to check proxy env: Error ip not in block
	I0127 11:14:37.487400    5908 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0127 11:14:37.487400    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:14:37.496455    5908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 11:14:37.496455    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:14:39.684042    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:14:39.684820    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:39.684820    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:14:39.699889    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:14:39.699889    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:39.700520    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:14:42.257796    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:14:42.257796    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:42.259655    5908 sshutil.go:53] new ssh client: &{IP:172.29.195.173 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m02\id_rsa Username:docker}
	I0127 11:14:42.284725    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:14:42.284791    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:42.285304    5908 sshutil.go:53] new ssh client: &{IP:172.29.195.173 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m02\id_rsa Username:docker}
	I0127 11:14:42.353969    5908 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8665184s)
	W0127 11:14:42.354046    5908 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0127 11:14:42.371984    5908 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.8754784s)
	W0127 11:14:42.372067    5908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:14:42.383144    5908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:14:42.416772    5908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 11:14:42.416772    5908 start.go:495] detecting cgroup driver to use...
	I0127 11:14:42.416772    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:14:42.467809    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0127 11:14:42.477030    5908 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0127 11:14:42.477030    5908 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
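
	The two warnings above come from the reachability probe logged earlier: `curl.exe -sS -m 2 https://registry.k8s.io/` was run inside the Linux guest and exited 127 ("curl.exe: command not found"), so the message reflects a missing binary rather than a verified network failure. A minimal native sketch of the same check with the same 2-second budget, for comparison only:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Same 2-second budget as the `-m 2` flag used by the probe in the log.
		client := &http.Client{Timeout: 2 * time.Second}
		resp, err := client.Get("https://registry.k8s.io/")
		if err != nil {
			fmt.Println("registry unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("registry reachable, status:", resp.Status)
	}
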
	I0127 11:14:42.504896    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 11:14:42.525153    5908 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 11:14:42.536185    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 11:14:42.566795    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 11:14:42.598027    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 11:14:42.628581    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 11:14:42.658239    5908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:14:42.687286    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 11:14:42.714325    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 11:14:42.743149    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 11:14:42.778199    5908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:14:42.799580    5908 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 11:14:42.812647    5908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 11:14:42.842140    5908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 11:14:42.866189    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:14:43.056639    5908 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 11:14:43.089996    5908 start.go:495] detecting cgroup driver to use...
	I0127 11:14:43.101263    5908 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0127 11:14:43.134780    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:14:43.168305    5908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 11:14:43.207241    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:14:43.239046    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 11:14:43.273294    5908 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 11:14:43.330357    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 11:14:43.352586    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:14:43.393283    5908 ssh_runner.go:195] Run: which cri-dockerd
	I0127 11:14:43.408902    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0127 11:14:43.427457    5908 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0127 11:14:43.466740    5908 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0127 11:14:43.655015    5908 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0127 11:14:43.864872    5908 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0127 11:14:43.864983    5908 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0127 11:14:43.908832    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:14:44.109837    5908 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 11:14:46.691962    5908 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5820983s)
	I0127 11:14:46.703337    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0127 11:14:46.736111    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 11:14:46.768059    5908 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0127 11:14:46.948019    5908 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0127 11:14:47.157589    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:14:47.357758    5908 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0127 11:14:47.395355    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 11:14:47.426466    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:14:47.615998    5908 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0127 11:14:47.724875    5908 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0127 11:14:47.735628    5908 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0127 11:14:47.744491    5908 start.go:563] Will wait 60s for crictl version
	I0127 11:14:47.755086    5908 ssh_runner.go:195] Run: which crictl
	I0127 11:14:47.771798    5908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:14:47.835071    5908 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0127 11:14:47.844512    5908 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 11:14:47.890277    5908 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 11:14:47.930823    5908 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0127 11:14:47.935065    5908 out.go:177]   - env NO_PROXY=172.29.192.249
	I0127 11:14:47.937658    5908 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0127 11:14:47.941705    5908 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0127 11:14:47.941705    5908 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0127 11:14:47.941705    5908 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0127 11:14:47.941705    5908 ip.go:211] Found interface: {Index:17 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:43:05:a6 Flags:up|broadcast|multicast|running}
	I0127 11:14:47.944705    5908 ip.go:214] interface addr: fe80::8ceb:a58b:811a:7c79/64
	I0127 11:14:47.944705    5908 ip.go:214] interface addr: 172.29.192.1/20
	I0127 11:14:47.957078    5908 ssh_runner.go:195] Run: grep 172.29.192.1	host.minikube.internal$ /etc/hosts
	I0127 11:14:47.964299    5908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
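
	The command above rewrites /etc/hosts by dropping any stale line ending in a tab plus host.minikube.internal and appending the current mapping. A small sketch of the same rewrite against a local copy (the file name hosts.demo is a placeholder):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// updateHosts drops any stale "<tab>name" entry and appends "ip<tab>name".
	func updateHosts(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line == "" || strings.HasSuffix(line, "\t"+name) {
				continue // skip blanks and the stale mapping
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		// hosts.demo is a placeholder; the log edits /etc/hosts on the guest.
		fmt.Println(updateHosts("hosts.demo", "172.29.192.1", "host.minikube.internal"))
	}
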
	I0127 11:14:47.984649    5908 mustload.go:65] Loading cluster: ha-011400
	I0127 11:14:47.984775    5908 config.go:182] Loaded profile config "ha-011400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 11:14:47.985849    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:14:49.983173    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:14:49.983173    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:49.983645    5908 host.go:66] Checking if "ha-011400" exists ...
	I0127 11:14:49.986429    5908 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400 for IP: 172.29.195.173
	I0127 11:14:49.986497    5908 certs.go:194] generating shared ca certs ...
	I0127 11:14:49.986497    5908 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:14:49.987265    5908 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0127 11:14:49.987572    5908 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0127 11:14:49.987837    5908 certs.go:256] generating profile certs ...
	I0127 11:14:49.988013    5908 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\client.key
	I0127 11:14:49.988558    5908 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key.01c75780
	I0127 11:14:49.988746    5908 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt.01c75780 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.29.192.249 172.29.195.173 172.29.207.254]
	I0127 11:14:50.209513    5908 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt.01c75780 ...
	I0127 11:14:50.209513    5908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt.01c75780: {Name:mk2dd436a578522815aab4ccec2d6480bc93b80b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:14:50.211226    5908 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key.01c75780 ...
	I0127 11:14:50.211226    5908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key.01c75780: {Name:mkcaa48240e7c60511aea566a82f2f37f1d4033b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:14:50.212167    5908 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt.01c75780 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt
	I0127 11:14:50.228840    5908 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key.01c75780 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key
	I0127 11:14:50.229623    5908 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.key
	I0127 11:14:50.229623    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0127 11:14:50.229623    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0127 11:14:50.230421    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0127 11:14:50.230458    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0127 11:14:50.230668    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0127 11:14:50.230801    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0127 11:14:50.230801    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0127 11:14:50.231366    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0127 11:14:50.231366    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem (1338 bytes)
	W0127 11:14:50.232208    5908 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956_empty.pem, impossibly tiny 0 bytes
	I0127 11:14:50.232398    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0127 11:14:50.232666    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0127 11:14:50.233132    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0127 11:14:50.233132    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0127 11:14:50.234151    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem (1708 bytes)
	I0127 11:14:50.234151    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:14:50.234701    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem -> /usr/share/ca-certificates/5956.pem
	I0127 11:14:50.234978    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> /usr/share/ca-certificates/59562.pem
	I0127 11:14:50.235029    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:14:52.318789    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:14:52.318789    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:52.318863    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:14:54.756279    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:14:54.756727    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:54.756783    5908 sshutil.go:53] new ssh client: &{IP:172.29.192.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\id_rsa Username:docker}
	I0127 11:14:54.857617    5908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0127 11:14:54.864847    5908 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0127 11:14:54.899656    5908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0127 11:14:54.907708    5908 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0127 11:14:54.937929    5908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0127 11:14:54.947372    5908 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0127 11:14:54.975175    5908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0127 11:14:54.983883    5908 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0127 11:14:55.014804    5908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0127 11:14:55.021130    5908 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0127 11:14:55.056421    5908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0127 11:14:55.063003    5908 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0127 11:14:55.080731    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:14:55.130642    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:14:55.177266    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:14:55.223142    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 11:14:55.265417    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0127 11:14:55.310758    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 11:14:55.356618    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:14:55.409870    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 11:14:55.457580    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:14:55.504504    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem --> /usr/share/ca-certificates/5956.pem (1338 bytes)
	I0127 11:14:55.551850    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem --> /usr/share/ca-certificates/59562.pem (1708 bytes)
	I0127 11:14:55.597832    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0127 11:14:55.633740    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0127 11:14:55.666109    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0127 11:14:55.696343    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0127 11:14:55.725631    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0127 11:14:55.756277    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0127 11:14:55.787169    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0127 11:14:55.828452    5908 ssh_runner.go:195] Run: openssl version
	I0127 11:14:55.848061    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5956.pem && ln -fs /usr/share/ca-certificates/5956.pem /etc/ssl/certs/5956.pem"
	I0127 11:14:55.876854    5908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5956.pem
	I0127 11:14:55.884725    5908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:52 /usr/share/ca-certificates/5956.pem
	I0127 11:14:55.897240    5908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5956.pem
	I0127 11:14:55.917841    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5956.pem /etc/ssl/certs/51391683.0"
	I0127 11:14:55.949207    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59562.pem && ln -fs /usr/share/ca-certificates/59562.pem /etc/ssl/certs/59562.pem"
	I0127 11:14:55.980437    5908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59562.pem
	I0127 11:14:55.988207    5908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:52 /usr/share/ca-certificates/59562.pem
	I0127 11:14:55.998808    5908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59562.pem
	I0127 11:14:56.020281    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59562.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 11:14:56.053161    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:14:56.086377    5908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:14:56.092530    5908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:14:56.103036    5908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:14:56.123053    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
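
	The openssl/ln steps above install each CA under /usr/share/ca-certificates and link it as /etc/ssl/certs/<subject-hash>.0 so OpenSSL-based clients can find it. A sketch of that linking, assuming openssl is on PATH and using illustrative paths:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkCert computes the certificate's subject hash with openssl and creates
	// the <hash>.0 symlink in certsDir if it is not already present.
	func linkCert(pem, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := certsDir + "/" + hash + ".0"
		if _, err := os.Lstat(link); err == nil {
			return nil // symlink already in place
		}
		return os.Symlink(pem, link)
	}

	func main() {
		// illustrative paths; the log links certs under /etc/ssl/certs on the guest
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Println("link failed:", err)
		}
	}
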
	I0127 11:14:56.155730    5908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:14:56.162092    5908 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 11:14:56.162398    5908 kubeadm.go:934] updating node {m02 172.29.195.173 8443 v1.32.1 docker true true} ...
	I0127 11:14:56.162514    5908 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-011400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.195.173
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:ha-011400 Namespace:default APIServerHAVIP:172.29.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 11:14:56.162720    5908 kube-vip.go:115] generating kube-vip config ...
	I0127 11:14:56.174864    5908 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0127 11:14:56.204828    5908 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0127 11:14:56.204828    5908 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.29.207.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
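
	The kube-vip manifest above is generated with the VIP 172.29.207.254, port 8443, and load-balancing enabled after the ip_vs modules loaded. As a rough illustration of rendering such a static-pod manifest from a template (the template text and field names here are assumptions, not minikube's actual template):

	package main

	import (
		"os"
		"text/template"
	)

	type vipConfig struct {
		VIP      string
		Port     int
		LBEnable bool // set when the ip_vs modules load, as in the log above
	}

	const manifest = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - name: kube-vip
	    env:
	    - name: address
	      value: {{.VIP}}
	    - name: port
	      value: "{{.Port}}"
	    - name: lb_enable
	      value: "{{.LBEnable}}"
	`

	func main() {
		t := template.Must(template.New("kube-vip").Parse(manifest))
		// VIP and port taken from the log; everything else is trimmed for brevity.
		_ = t.Execute(os.Stdout, vipConfig{VIP: "172.29.207.254", Port: 8443, LBEnable: true})
	}
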
	I0127 11:14:56.215922    5908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 11:14:56.233146    5908 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.1': No such file or directory
	
	Initiating transfer...
	I0127 11:14:56.245910    5908 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.1
	I0127 11:14:56.275297    5908 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl
	I0127 11:14:56.275431    5908 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet
	I0127 11:14:56.275431    5908 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm
	I0127 11:14:57.516950    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:14:57.550483    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet -> /var/lib/minikube/binaries/v1.32.1/kubelet
	I0127 11:14:57.560869    5908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet
	I0127 11:14:57.567867    5908 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubelet': No such file or directory
	I0127 11:14:57.567867    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet --> /var/lib/minikube/binaries/v1.32.1/kubelet (77398276 bytes)
	I0127 11:14:57.590891    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm -> /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0127 11:14:57.601877    5908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0127 11:14:57.672716    5908 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubeadm': No such file or directory
	I0127 11:14:57.672969    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm --> /var/lib/minikube/binaries/v1.32.1/kubeadm (70942872 bytes)
	I0127 11:14:57.839401    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl -> /var/lib/minikube/binaries/v1.32.1/kubectl
	I0127 11:14:57.850235    5908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl
	I0127 11:14:57.870924    5908 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubectl': No such file or directory
	I0127 11:14:57.870924    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl --> /var/lib/minikube/binaries/v1.32.1/kubectl (57323672 bytes)
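
	Each binary transfer above first stats the target path on the guest and only copies from the local cache when the file is missing. A local stand-in for that check, with placeholder paths and a plain file copy instead of scp over SSH:

	package main

	import (
		"fmt"
		"io"
		"os"
	)

	// copyIfMissing copies src to dst only when dst does not already exist,
	// mirroring the stat-then-scp pattern in the transfer above.
	func copyIfMissing(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			return nil // already present, skip the transfer
		} else if !os.IsNotExist(err) {
			return err
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}

	func main() {
		for _, name := range []string{"kubelet", "kubeadm", "kubectl"} {
			// cache/ and binaries/ stand in for the local cache and guest paths in the log
			fmt.Println(name, copyIfMissing("cache/"+name, "binaries/"+name))
		}
	}
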
	I0127 11:14:59.174722    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0127 11:14:59.194211    5908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0127 11:14:59.227352    5908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:14:59.256896    5908 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0127 11:14:59.302312    5908 ssh_runner.go:195] Run: grep 172.29.207.254	control-plane.minikube.internal$ /etc/hosts
	I0127 11:14:59.309138    5908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:14:59.340210    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:14:59.551867    5908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:14:59.585050    5908 host.go:66] Checking if "ha-011400" exists ...
	I0127 11:14:59.585947    5908 start.go:317] joinCluster: &{Name:ha-011400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-011400 Namespace:default APIServerHAVIP:172.29.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.192.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.195.173 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:14:59.586142    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0127 11:14:59.586268    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:15:01.757153    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:15:01.757153    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:15:01.757346    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:15:04.319045    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:15:04.319226    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:15:04.319345    5908 sshutil.go:53] new ssh client: &{IP:172.29.192.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\id_rsa Username:docker}
	I0127 11:15:04.786583    5908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.2002402s)
	I0127 11:15:04.786687    5908 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.29.195.173 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 11:15:04.786777    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7etute.u7301vj52t2o46lo --discovery-token-ca-cert-hash sha256:7b53ba02e26824ededdf08178373023b65bda8005ddad46edfe91cb6a3cb8d3f --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-011400-m02 --control-plane --apiserver-advertise-address=172.29.195.173 --apiserver-bind-port=8443"
	I0127 11:15:45.670358    5908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7etute.u7301vj52t2o46lo --discovery-token-ca-cert-hash sha256:7b53ba02e26824ededdf08178373023b65bda8005ddad46edfe91cb6a3cb8d3f --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-011400-m02 --control-plane --apiserver-advertise-address=172.29.195.173 --apiserver-bind-port=8443": (40.8831562s)
	I0127 11:15:45.670485    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0127 11:15:46.448867    5908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-011400-m02 minikube.k8s.io/updated_at=2025_01_27T11_15_46_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=ha-011400 minikube.k8s.io/primary=false
	I0127 11:15:46.695142    5908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-011400-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0127 11:15:46.834632    5908 start.go:319] duration metric: took 47.2481943s to joinCluster
	I0127 11:15:46.834632    5908 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.29.195.173 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 11:15:46.835457    5908 config.go:182] Loaded profile config "ha-011400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 11:15:46.839318    5908 out.go:177] * Verifying Kubernetes components...
	I0127 11:15:46.853307    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:15:47.228955    5908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:15:47.268969    5908 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 11:15:47.270252    5908 kapi.go:59] client config for ha-011400: &rest.Config{Host:"https://172.29.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-011400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-011400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x301e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0127 11:15:47.270431    5908 kubeadm.go:483] Overriding stale ClientConfig host https://172.29.207.254:8443 with https://172.29.192.249:8443
	I0127 11:15:47.271131    5908 node_ready.go:35] waiting up to 6m0s for node "ha-011400-m02" to be "Ready" ...
	I0127 11:15:47.271131    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:47.271670    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:47.271670    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:47.271730    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:47.295386    5908 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0127 11:15:47.771798    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:47.771798    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:47.771798    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:47.771798    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:47.779743    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:15:48.271933    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:48.271933    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:48.271933    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:48.271933    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:48.280934    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:15:48.772238    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:48.772238    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:48.772238    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:48.772238    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:48.778224    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:15:49.272041    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:49.272041    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:49.272041    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:49.272041    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:49.278846    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:15:49.279696    5908 node_ready.go:53] node "ha-011400-m02" has status "Ready":"False"
	I0127 11:15:49.772041    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:49.772041    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:49.772041    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:49.772041    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:49.778009    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:15:50.271984    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:50.271984    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:50.271984    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:50.271984    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:50.276654    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:15:50.771689    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:50.771830    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:50.771830    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:50.771830    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:50.777784    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:15:51.271599    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:51.271599    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:51.271599    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:51.271599    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:51.613797    5908 round_trippers.go:574] Response Status: 200 OK in 342 milliseconds
	I0127 11:15:51.614912    5908 node_ready.go:53] node "ha-011400-m02" has status "Ready":"False"
	I0127 11:15:51.771528    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:51.771586    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:51.771586    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:51.771586    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:51.916471    5908 round_trippers.go:574] Response Status: 200 OK in 144 milliseconds
	I0127 11:15:52.271299    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:52.271299    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:52.271299    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:52.271299    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:52.284030    5908 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0127 11:15:52.771644    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:52.771644    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:52.771644    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:52.771644    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:52.776799    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:15:53.272529    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:53.272529    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:53.272529    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:53.272529    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:53.279392    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:15:53.772241    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:53.772371    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:53.772371    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:53.772371    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:53.778625    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:15:53.779717    5908 node_ready.go:53] node "ha-011400-m02" has status "Ready":"False"
	I0127 11:15:54.271751    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:54.271751    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:54.271751    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:54.271751    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:54.277676    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:15:54.771626    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:54.771626    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:54.771626    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:54.771626    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:54.783650    5908 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0127 11:15:55.272859    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:55.272859    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:55.272859    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:55.272859    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:55.277241    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:15:55.772313    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:55.772313    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:55.772313    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:55.772313    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:55.781274    5908 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0127 11:15:55.782556    5908 node_ready.go:53] node "ha-011400-m02" has status "Ready":"False"
	I0127 11:15:56.271996    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:56.272413    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:56.272413    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:56.272413    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:56.277325    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:15:56.772396    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:56.772396    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:56.772396    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:56.772396    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:56.777347    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:15:57.271406    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:57.271406    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:57.271406    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:57.271406    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:57.276114    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:15:57.771796    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:57.771796    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:57.771796    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:57.771796    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:57.777155    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:15:58.271727    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:58.271727    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:58.271727    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:58.271727    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:58.285643    5908 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0127 11:15:58.286074    5908 node_ready.go:53] node "ha-011400-m02" has status "Ready":"False"
	I0127 11:15:58.772266    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:58.772266    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:58.772266    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:58.772266    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:58.778316    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:15:59.272186    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:59.272186    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:59.272186    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:59.272186    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:59.276635    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:15:59.771427    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:59.771427    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:59.771427    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:59.771427    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:59.777032    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:00.272536    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:00.272613    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:00.272613    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:00.272613    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:00.279414    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:16:00.772119    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:00.772185    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:00.772185    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:00.772185    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:00.778138    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:00.778971    5908 node_ready.go:53] node "ha-011400-m02" has status "Ready":"False"
	I0127 11:16:01.272002    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:01.272002    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:01.272002    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:01.272002    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:01.277104    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:01.772296    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:01.772296    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:01.772296    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:01.772296    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:01.777636    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:02.272592    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:02.272592    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:02.272592    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:02.272592    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:02.277341    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:16:02.771798    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:02.771798    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:02.771798    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:02.771798    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:02.776701    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:16:03.272612    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:03.272612    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:03.272612    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:03.272757    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:03.277893    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:03.278603    5908 node_ready.go:53] node "ha-011400-m02" has status "Ready":"False"
	I0127 11:16:03.771520    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:03.771520    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:03.771520    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:03.771520    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:03.777757    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:16:04.272436    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:04.272436    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:04.272436    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:04.272436    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:04.278936    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:16:04.771987    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:04.772042    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:04.772042    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:04.772042    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:04.780046    5908 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0127 11:16:05.272322    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:05.272393    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:05.272393    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:05.272393    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:05.277324    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:16:05.771998    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:05.771998    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:05.771998    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:05.771998    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:05.778294    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:16:05.779962    5908 node_ready.go:53] node "ha-011400-m02" has status "Ready":"False"
	I0127 11:16:06.272466    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:06.272466    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:06.272466    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:06.272466    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:06.278170    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:06.771864    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:06.771864    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:06.771864    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:06.771864    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:06.777203    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:07.272649    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:07.272649    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:07.272649    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:07.272649    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:07.286576    5908 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0127 11:16:07.771820    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:07.771820    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:07.771820    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:07.771820    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:07.780255    5908 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0127 11:16:07.780379    5908 node_ready.go:53] node "ha-011400-m02" has status "Ready":"False"
	I0127 11:16:08.272368    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:08.272368    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:08.272368    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:08.272368    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:08.276298    5908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 11:16:08.278161    5908 node_ready.go:49] node "ha-011400-m02" has status "Ready":"True"
	I0127 11:16:08.278161    5908 node_ready.go:38] duration metric: took 21.0068112s for node "ha-011400-m02" to be "Ready" ...
	I0127 11:16:08.278161    5908 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:16:08.278324    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods
	I0127 11:16:08.278324    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:08.278388    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:08.278388    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:08.284651    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:16:08.293309    5908 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-228t7" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:08.293309    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-228t7
	I0127 11:16:08.293309    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:08.293309    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:08.293309    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:08.297673    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:16:08.298928    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:16:08.298928    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:08.298928    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:08.298928    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:08.303240    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:16:08.304146    5908 pod_ready.go:93] pod "coredns-668d6bf9bc-228t7" in "kube-system" namespace has status "Ready":"True"
	I0127 11:16:08.304146    5908 pod_ready.go:82] duration metric: took 10.8363ms for pod "coredns-668d6bf9bc-228t7" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:08.304146    5908 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-8b9xh" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:08.304369    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-8b9xh
	I0127 11:16:08.304369    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:08.304369    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:08.304369    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:08.307871    5908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 11:16:08.309175    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:16:08.309224    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:08.309274    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:08.309274    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:08.313091    5908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 11:16:08.314449    5908 pod_ready.go:93] pod "coredns-668d6bf9bc-8b9xh" in "kube-system" namespace has status "Ready":"True"
	I0127 11:16:08.314632    5908 pod_ready.go:82] duration metric: took 10.3754ms for pod "coredns-668d6bf9bc-8b9xh" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:08.314632    5908 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:08.314744    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-011400
	I0127 11:16:08.314845    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:08.314845    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:08.314845    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:08.318642    5908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 11:16:08.319427    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:16:08.319427    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:08.319427    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:08.319427    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:08.324767    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:08.324930    5908 pod_ready.go:93] pod "etcd-ha-011400" in "kube-system" namespace has status "Ready":"True"
	I0127 11:16:08.324930    5908 pod_ready.go:82] duration metric: took 10.2974ms for pod "etcd-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:08.324930    5908 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:08.324930    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-011400-m02
	I0127 11:16:08.325460    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:08.325460    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:08.325460    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:08.334598    5908 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0127 11:16:08.334598    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:08.335231    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:08.335231    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:08.335231    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:08.340937    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:08.341907    5908 pod_ready.go:93] pod "etcd-ha-011400-m02" in "kube-system" namespace has status "Ready":"True"
	I0127 11:16:08.341907    5908 pod_ready.go:82] duration metric: took 16.9769ms for pod "etcd-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:08.342004    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:08.472056    5908 request.go:632] Waited for 129.9962ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-011400
	I0127 11:16:08.472056    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-011400
	I0127 11:16:08.472513    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:08.472544    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:08.472544    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:08.476974    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:16:08.672915    5908 request.go:632] Waited for 194.9814ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:16:08.673256    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:16:08.673256    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:08.673256    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:08.673256    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:08.678938    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:08.679763    5908 pod_ready.go:93] pod "kube-apiserver-ha-011400" in "kube-system" namespace has status "Ready":"True"
	I0127 11:16:08.679920    5908 pod_ready.go:82] duration metric: took 337.8775ms for pod "kube-apiserver-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:08.679920    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:08.872398    5908 request.go:632] Waited for 192.4042ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-011400-m02
	I0127 11:16:08.872398    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-011400-m02
	I0127 11:16:08.872398    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:08.872398    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:08.872398    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:08.878014    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:09.072446    5908 request.go:632] Waited for 193.3336ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:09.072446    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:09.072446    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:09.072446    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:09.072446    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:09.079783    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:16:09.080845    5908 pod_ready.go:93] pod "kube-apiserver-ha-011400-m02" in "kube-system" namespace has status "Ready":"True"
	I0127 11:16:09.080901    5908 pod_ready.go:82] duration metric: took 400.9213ms for pod "kube-apiserver-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:09.080901    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:09.272762    5908 request.go:632] Waited for 191.7266ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-011400
	I0127 11:16:09.273279    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-011400
	I0127 11:16:09.273279    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:09.273279    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:09.273279    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:09.278678    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:09.472021    5908 request.go:632] Waited for 192.6179ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:16:09.472301    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:16:09.472301    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:09.472301    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:09.472301    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:09.479797    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:16:09.480828    5908 pod_ready.go:93] pod "kube-controller-manager-ha-011400" in "kube-system" namespace has status "Ready":"True"
	I0127 11:16:09.480911    5908 pod_ready.go:82] duration metric: took 400.0068ms for pod "kube-controller-manager-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:09.480911    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:09.672283    5908 request.go:632] Waited for 191.3691ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-011400-m02
	I0127 11:16:09.672728    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-011400-m02
	I0127 11:16:09.672728    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:09.672728    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:09.672728    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:09.678599    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:09.872281    5908 request.go:632] Waited for 192.7668ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:09.872281    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:09.872781    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:09.872972    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:09.873062    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:09.878833    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:16:09.878961    5908 pod_ready.go:93] pod "kube-controller-manager-ha-011400-m02" in "kube-system" namespace has status "Ready":"True"
	I0127 11:16:09.878961    5908 pod_ready.go:82] duration metric: took 398.0457ms for pod "kube-controller-manager-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:09.878961    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hg72m" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:10.072092    5908 request.go:632] Waited for 193.1289ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg72m
	I0127 11:16:10.072092    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg72m
	I0127 11:16:10.072092    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:10.072092    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:10.072092    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:10.079764    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:16:10.272650    5908 request.go:632] Waited for 191.8681ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:16:10.272650    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:16:10.273177    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:10.273216    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:10.273216    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:10.278267    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:16:10.279104    5908 pod_ready.go:93] pod "kube-proxy-hg72m" in "kube-system" namespace has status "Ready":"True"
	I0127 11:16:10.279104    5908 pod_ready.go:82] duration metric: took 400.1388ms for pod "kube-proxy-hg72m" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:10.279223    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x52km" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:10.472068    5908 request.go:632] Waited for 192.8433ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x52km
	I0127 11:16:10.472068    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x52km
	I0127 11:16:10.472068    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:10.472068    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:10.472068    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:10.477030    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:16:10.672769    5908 request.go:632] Waited for 194.7103ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:10.673207    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:10.673241    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:10.673283    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:10.673283    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:10.681044    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:16:10.681652    5908 pod_ready.go:93] pod "kube-proxy-x52km" in "kube-system" namespace has status "Ready":"True"
	I0127 11:16:10.681652    5908 pod_ready.go:82] duration metric: took 402.4255ms for pod "kube-proxy-x52km" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:10.681652    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:10.873194    5908 request.go:632] Waited for 191.5393ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-011400
	I0127 11:16:10.873194    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-011400
	I0127 11:16:10.873194    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:10.873194    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:10.873194    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:10.878158    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:16:11.073007    5908 request.go:632] Waited for 193.0754ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:16:11.073311    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:16:11.073311    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:11.073385    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:11.073385    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:11.079942    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:16:11.080807    5908 pod_ready.go:93] pod "kube-scheduler-ha-011400" in "kube-system" namespace has status "Ready":"True"
	I0127 11:16:11.080807    5908 pod_ready.go:82] duration metric: took 399.1503ms for pod "kube-scheduler-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:11.080807    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:11.272419    5908 request.go:632] Waited for 191.6099ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-011400-m02
	I0127 11:16:11.272419    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-011400-m02
	I0127 11:16:11.272419    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:11.272419    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:11.272419    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:11.278551    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:16:11.472203    5908 request.go:632] Waited for 192.9058ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:11.472203    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:11.472203    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:11.472203    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:11.472203    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:11.477496    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:11.478524    5908 pod_ready.go:93] pod "kube-scheduler-ha-011400-m02" in "kube-system" namespace has status "Ready":"True"
	I0127 11:16:11.478524    5908 pod_ready.go:82] duration metric: took 397.7132ms for pod "kube-scheduler-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:11.478524    5908 pod_ready.go:39] duration metric: took 3.2002488s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:16:11.478609    5908 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:16:11.489806    5908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:16:11.515702    5908 api_server.go:72] duration metric: took 24.6808131s to wait for apiserver process to appear ...
	I0127 11:16:11.515749    5908 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:16:11.515749    5908 api_server.go:253] Checking apiserver healthz at https://172.29.192.249:8443/healthz ...
	I0127 11:16:11.532748    5908 api_server.go:279] https://172.29.192.249:8443/healthz returned 200:
	ok
	I0127 11:16:11.532876    5908 round_trippers.go:463] GET https://172.29.192.249:8443/version
	I0127 11:16:11.532950    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:11.532950    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:11.532950    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:11.536422    5908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 11:16:11.536422    5908 api_server.go:141] control plane version: v1.32.1
	I0127 11:16:11.536422    5908 api_server.go:131] duration metric: took 20.6721ms to wait for apiserver health ...
	I0127 11:16:11.536422    5908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:16:11.673267    5908 request.go:632] Waited for 136.8444ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods
	I0127 11:16:11.673267    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods
	I0127 11:16:11.673267    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:11.673267    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:11.673267    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:11.680282    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:16:11.688159    5908 system_pods.go:59] 17 kube-system pods found
	I0127 11:16:11.688159    5908 system_pods.go:61] "coredns-668d6bf9bc-228t7" [ac40dfec-9e9f-4414-9259-a7dadfb2c93d] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "coredns-668d6bf9bc-8b9xh" [647a1e55-d5ce-4f2b-933f-8caf13d7463b] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "etcd-ha-011400" [90238c1c-70b2-47e8-9bab-49f2334ca4b3] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "etcd-ha-011400-m02" [fcda2776-bc47-47af-948a-94e549a41fec] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "kindnet-fs97j" [d480fa1c-808e-4c5d-818e-26281dca23d4] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "kindnet-ll5br" [6a2a0fea-258a-4593-8445-398f37e379e4] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "kube-apiserver-ha-011400" [7bda282c-7cb1-46f1-9bb8-366bc992aaed] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "kube-apiserver-ha-011400-m02" [8e5dcd2c-fbca-473d-8aa2-70e7fb8866c7] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "kube-controller-manager-ha-011400" [1b8e425d-03da-4d95-86e5-e1e6f15b64bd] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "kube-controller-manager-ha-011400-m02" [c20cfbe1-337f-462f-968f-c19741634ac4] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "kube-proxy-hg72m" [dc860339-d169-452b-9621-170ae73c7a5e] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "kube-proxy-x52km" [0a6cc7f2-2b15-4db1-b5fb-d6448d4bd295] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "kube-scheduler-ha-011400" [35220ede-c59f-4d24-88c5-728088af2abf] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "kube-scheduler-ha-011400-m02" [250614b6-0c08-4e8a-a080-58253b81d4f7] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "kube-vip-ha-011400" [31c47527-c1fe-4064-bcb4-faffcedab1f4] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "kube-vip-ha-011400-m02" [1e3bf93b-caab-4f37-a8bd-36f0ad76eb4c] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "storage-provisioner" [2755d063-0183-41c1-9fe8-e533017aef39] Running
	I0127 11:16:11.688159    5908 system_pods.go:74] duration metric: took 151.7354ms to wait for pod list to return data ...
	I0127 11:16:11.688159    5908 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:16:11.873149    5908 request.go:632] Waited for 184.9887ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/default/serviceaccounts
	I0127 11:16:11.873476    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/default/serviceaccounts
	I0127 11:16:11.873476    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:11.873476    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:11.873476    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:11.879842    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:16:11.879842    5908 default_sa.go:45] found service account: "default"
	I0127 11:16:11.879842    5908 default_sa.go:55] duration metric: took 191.6813ms for default service account to be created ...
	I0127 11:16:11.879842    5908 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:16:12.072143    5908 request.go:632] Waited for 192.2991ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods
	I0127 11:16:12.072362    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods
	I0127 11:16:12.072362    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:12.072362    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:12.072362    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:12.080025    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:16:12.086589    5908 system_pods.go:87] 17 kube-system pods found
	I0127 11:16:12.086673    5908 system_pods.go:105] "coredns-668d6bf9bc-228t7" [ac40dfec-9e9f-4414-9259-a7dadfb2c93d] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "coredns-668d6bf9bc-8b9xh" [647a1e55-d5ce-4f2b-933f-8caf13d7463b] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "etcd-ha-011400" [90238c1c-70b2-47e8-9bab-49f2334ca4b3] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "etcd-ha-011400-m02" [fcda2776-bc47-47af-948a-94e549a41fec] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "kindnet-fs97j" [d480fa1c-808e-4c5d-818e-26281dca23d4] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "kindnet-ll5br" [6a2a0fea-258a-4593-8445-398f37e379e4] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "kube-apiserver-ha-011400" [7bda282c-7cb1-46f1-9bb8-366bc992aaed] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "kube-apiserver-ha-011400-m02" [8e5dcd2c-fbca-473d-8aa2-70e7fb8866c7] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "kube-controller-manager-ha-011400" [1b8e425d-03da-4d95-86e5-e1e6f15b64bd] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "kube-controller-manager-ha-011400-m02" [c20cfbe1-337f-462f-968f-c19741634ac4] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "kube-proxy-hg72m" [dc860339-d169-452b-9621-170ae73c7a5e] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "kube-proxy-x52km" [0a6cc7f2-2b15-4db1-b5fb-d6448d4bd295] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "kube-scheduler-ha-011400" [35220ede-c59f-4d24-88c5-728088af2abf] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "kube-scheduler-ha-011400-m02" [250614b6-0c08-4e8a-a080-58253b81d4f7] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "kube-vip-ha-011400" [31c47527-c1fe-4064-bcb4-faffcedab1f4] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "kube-vip-ha-011400-m02" [1e3bf93b-caab-4f37-a8bd-36f0ad76eb4c] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "storage-provisioner" [2755d063-0183-41c1-9fe8-e533017aef39] Running
	I0127 11:16:12.086673    5908 system_pods.go:147] duration metric: took 206.8292ms to wait for k8s-apps to be running ...
	I0127 11:16:12.086673    5908 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 11:16:12.097723    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:16:12.123285    5908 system_svc.go:56] duration metric: took 36.6119ms WaitForService to wait for kubelet
	I0127 11:16:12.123285    5908 kubeadm.go:582] duration metric: took 25.28839s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:16:12.123285    5908 node_conditions.go:102] verifying NodePressure condition ...
	I0127 11:16:12.273572    5908 request.go:632] Waited for 150.2846ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes
	I0127 11:16:12.273572    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes
	I0127 11:16:12.273572    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:12.273572    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:12.273572    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:12.280318    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:16:12.281994    5908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 11:16:12.282059    5908 node_conditions.go:123] node cpu capacity is 2
	I0127 11:16:12.282059    5908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 11:16:12.282059    5908 node_conditions.go:123] node cpu capacity is 2
	I0127 11:16:12.282059    5908 node_conditions.go:105] duration metric: took 158.772ms to run NodePressure ...
	I0127 11:16:12.282059    5908 start.go:241] waiting for startup goroutines ...
	I0127 11:16:12.282151    5908 start.go:255] writing updated cluster config ...
	I0127 11:16:12.290415    5908 out.go:201] 
	I0127 11:16:12.313429    5908 config.go:182] Loaded profile config "ha-011400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 11:16:12.313429    5908 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\config.json ...
	I0127 11:16:12.327008    5908 out.go:177] * Starting "ha-011400-m03" control-plane node in "ha-011400" cluster
	I0127 11:16:12.330319    5908 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0127 11:16:12.330319    5908 cache.go:56] Caching tarball of preloaded images
	I0127 11:16:12.331471    5908 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 11:16:12.331471    5908 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0127 11:16:12.332109    5908 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\config.json ...
	I0127 11:16:12.334747    5908 start.go:360] acquireMachinesLock for ha-011400-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 11:16:12.334945    5908 start.go:364] duration metric: took 115.7µs to acquireMachinesLock for "ha-011400-m03"
	I0127 11:16:12.335231    5908 start.go:93] Provisioning new machine with config: &{Name:ha-011400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-011400 Namespace:default APIServerHAVIP:172.29.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.192.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.195.173 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 11:16:12.335447    5908 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0127 11:16:12.339826    5908 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0127 11:16:12.339826    5908 start.go:159] libmachine.API.Create for "ha-011400" (driver="hyperv")
	I0127 11:16:12.339826    5908 client.go:168] LocalClient.Create starting
	I0127 11:16:12.340648    5908 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0127 11:16:12.341303    5908 main.go:141] libmachine: Decoding PEM data...
	I0127 11:16:12.341303    5908 main.go:141] libmachine: Parsing certificate...
	I0127 11:16:12.341303    5908 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0127 11:16:12.342016    5908 main.go:141] libmachine: Decoding PEM data...
	I0127 11:16:12.342016    5908 main.go:141] libmachine: Parsing certificate...
	I0127 11:16:12.342016    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0127 11:16:14.200229    5908 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0127 11:16:14.201076    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:14.201076    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0127 11:16:15.919387    5908 main.go:141] libmachine: [stdout =====>] : False
	
	I0127 11:16:15.919387    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:15.919886    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0127 11:16:17.390618    5908 main.go:141] libmachine: [stdout =====>] : True
	
	I0127 11:16:17.391728    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:17.391728    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0127 11:16:21.059517    5908 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0127 11:16:21.060377    5908 main.go:141] libmachine: [stderr =====>] : 
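Every Hyper-V query in this run is a shell-out to powershell.exe with -NoProfile -NonInteractive; the switch list above comes from piping Get-VMSwitch through ConvertTo-Json and keeping external switches plus the well-known "Default Switch" GUID (SwitchType 1 is an internal switch, which is why the run falls back to the Default Switch rather than an external one). A minimal Go sketch of that shell-out-and-decode pattern, as an illustration only and not the driver's actual code, assuming powershell.exe and the Hyper-V module are available:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // vmSwitch mirrors the three properties selected in the logged query.
    type vmSwitch struct {
        Id         string
        Name       string
        SwitchType int
    }

    func main() {
        // Same pipeline as in the log: JSON-encode Id, Name, SwitchType for
        // every external switch plus the built-in "Default Switch" id.
        ps := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ` +
            `ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType | ` +
            `Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')} | ` +
            `Sort-Object -Property SwitchType)`
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
        if err != nil {
            panic(err)
        }
        var switches []vmSwitch
        if err := json.Unmarshal(out, &switches); err != nil {
            panic(err)
        }
        fmt.Printf("candidate switches: %+v\n", switches)
    }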
	I0127 11:16:21.062408    5908 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 11:16:21.594430    5908 main.go:141] libmachine: Creating SSH key...
	I0127 11:16:21.805933    5908 main.go:141] libmachine: Creating VM...
	I0127 11:16:21.806868    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0127 11:16:24.691501    5908 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0127 11:16:24.691869    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:24.691924    5908 main.go:141] libmachine: Using switch "Default Switch"
	I0127 11:16:24.691924    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0127 11:16:26.525666    5908 main.go:141] libmachine: [stdout =====>] : True
	
	I0127 11:16:26.526422    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:26.526422    5908 main.go:141] libmachine: Creating VHD
	I0127 11:16:26.526516    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0127 11:16:30.309479    5908 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 4E7FCE85-94FA-4073-A6ED-9004DCC96862
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0127 11:16:30.309479    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:30.309479    5908 main.go:141] libmachine: Writing magic tar header
	I0127 11:16:30.309479    5908 main.go:141] libmachine: Writing SSH key tar header
	I0127 11:16:30.321404    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0127 11:16:33.502890    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:16:33.503628    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:33.503872    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m03\disk.vhd' -SizeBytes 20000MB
	I0127 11:16:36.003207    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:16:36.003207    5908 main.go:141] libmachine: [stderr =====>] : 
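The disk preparation above appears to follow the boot2docker-style pattern: a tiny fixed-size VHD is created first so the driver can write a tar payload (the "magic" header plus the freshly generated SSH key) directly into it, then the file is converted to a dynamic VHD and resized to the requested 20000MB. A sketch of those three Hyper-V calls driven from Go, with a shortened stand-in path and the same PowerShell shell-out as above; illustrative only:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // runPS executes one PowerShell command line, the same way each
    // "[executing ==>]" entry in the log does.
    func runPS(cmd string) error {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).CombinedOutput()
        fmt.Printf("%s\n%s\n", cmd, out)
        return err
    }

    func main() {
        dir := `C:\minikube\machines\ha-011400-m03` // shortened stand-in path
        steps := []string{
            // 1. small fixed VHD that the SSH-key tar payload is written into
            fmt.Sprintf(`Hyper-V\New-VHD -Path '%s\fixed.vhd' -SizeBytes 10MB -Fixed`, dir),
            // 2. convert to a dynamic (sparse) disk, dropping the fixed source
            fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s\fixed.vhd' -DestinationPath '%s\disk.vhd' -VHDType Dynamic -DeleteSource`, dir, dir),
            // 3. grow the dynamic disk to the requested size
            fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s\disk.vhd' -SizeBytes 20000MB`, dir),
        }
        for _, s := range steps {
            if err := runPS(s); err != nil {
                panic(err)
            }
        }
    }

The VM itself is then assembled the same way in the lines that follow: New-VM on the chosen switch, Set-VMMemory and Set-VMProcessor, Set-VMDvdDrive pointing at boot2docker.iso, Add-VMHardDiskDrive for disk.vhd, and finally Start-VM.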
	I0127 11:16:36.003207    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-011400-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0127 11:16:39.593101    5908 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-011400-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0127 11:16:39.593101    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:39.593101    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-011400-m03 -DynamicMemoryEnabled $false
	I0127 11:16:41.813108    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:16:41.813108    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:41.813819    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-011400-m03 -Count 2
	I0127 11:16:43.954430    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:16:43.954430    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:43.954607    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-011400-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m03\boot2docker.iso'
	I0127 11:16:46.524866    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:16:46.525605    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:46.525731    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-011400-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m03\disk.vhd'
	I0127 11:16:49.182904    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:16:49.182904    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:49.182904    5908 main.go:141] libmachine: Starting VM...
	I0127 11:16:49.182904    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-011400-m03
	I0127 11:16:52.203004    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:16:52.203004    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:52.203004    5908 main.go:141] libmachine: Waiting for host to start...
	I0127 11:16:52.203004    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:16:54.486804    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:16:54.487829    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:54.487921    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:16:56.964271    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:16:56.964271    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:57.965198    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:17:00.212655    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:17:00.212655    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:00.212655    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:17:02.719878    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:17:02.719947    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:03.721167    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:17:05.884562    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:17:05.884562    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:05.885561    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:17:08.370763    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:17:08.370763    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:09.371503    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:17:11.584706    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:17:11.584706    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:11.584706    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:17:14.063448    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:17:14.063448    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:15.064040    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:17:17.251342    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:17:17.251342    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:17.251342    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:17:19.827445    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:17:19.827445    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:19.827445    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:17:21.901824    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:17:21.901824    5908 main.go:141] libmachine: [stderr =====>] : 
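The "Waiting for host to start..." section is a simple poll: read `(Get-VM <name>).state`, then ask the first network adapter for its first IP address, and pause about a second between attempts until a non-empty address comes back (172.29.196.110 after roughly 25 seconds here). A condensed, self-contained Go sketch of that loop, again purely illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // psOutput runs one PowerShell command and returns its trimmed stdout.
    func psOutput(cmd string) string {
        out, _ := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
        return strings.TrimSpace(string(out))
    }

    func main() {
        vm := "ha-011400-m03"
        for {
            state := psOutput(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
            ip := psOutput(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
            fmt.Printf("state=%s ip=%q\n", state, ip)
            if state == "Running" && ip != "" {
                break // the guest has picked up an address on the Default Switch
            }
            time.Sleep(time.Second) // matches the one-second gap between retries in the log
        }
    }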
	I0127 11:17:21.901824    5908 machine.go:93] provisionDockerMachine start ...
	I0127 11:17:21.902488    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:17:24.036887    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:17:24.037910    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:24.037984    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:17:26.532863    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:17:26.532863    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:26.538403    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:17:26.539152    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.196.110 22 <nil> <nil>}
	I0127 11:17:26.539152    5908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 11:17:26.667484    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 11:17:26.667484    5908 buildroot.go:166] provisioning hostname "ha-011400-m03"
	I0127 11:17:26.667484    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:17:28.740837    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:17:28.741867    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:28.741867    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:17:31.283621    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:17:31.283621    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:31.289147    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:17:31.289228    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.196.110 22 <nil> <nil>}
	I0127 11:17:31.289228    5908 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-011400-m03 && echo "ha-011400-m03" | sudo tee /etc/hostname
	I0127 11:17:31.433918    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-011400-m03
	
	I0127 11:17:31.433918    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:17:33.567824    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:17:33.567824    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:33.567974    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:17:36.024568    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:17:36.025430    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:36.030387    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:17:36.031007    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.196.110 22 <nil> <nil>}
	I0127 11:17:36.031007    5908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-011400-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-011400-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-011400-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:17:36.166976    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 11:17:36.166976    5908 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0127 11:17:36.167055    5908 buildroot.go:174] setting up certificates
	I0127 11:17:36.167145    5908 provision.go:84] configureAuth start
	I0127 11:17:36.167200    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:17:38.238464    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:17:38.238464    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:38.238464    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:17:40.734427    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:17:40.734427    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:40.734427    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:17:42.861668    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:17:42.861668    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:42.861668    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:17:45.409676    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:17:45.409676    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:45.409779    5908 provision.go:143] copyHostCerts
	I0127 11:17:45.410025    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0127 11:17:45.410282    5908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0127 11:17:45.410352    5908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0127 11:17:45.410749    5908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0127 11:17:45.411366    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0127 11:17:45.412125    5908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0127 11:17:45.412188    5908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0127 11:17:45.412188    5908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0127 11:17:45.413596    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0127 11:17:45.413935    5908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0127 11:17:45.414042    5908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0127 11:17:45.414376    5908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0127 11:17:45.415302    5908 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-011400-m03 san=[127.0.0.1 172.29.196.110 ha-011400-m03 localhost minikube]
	I0127 11:17:45.516982    5908 provision.go:177] copyRemoteCerts
	I0127 11:17:45.529791    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:17:45.529869    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:17:47.695524    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:17:47.696157    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:47.696426    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:17:50.223462    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:17:50.223657    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:50.224146    5908 sshutil.go:53] new ssh client: &{IP:172.29.196.110 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m03\id_rsa Username:docker}
	I0127 11:17:50.328064    5908 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7981447s)
	I0127 11:17:50.328064    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0127 11:17:50.328749    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 11:17:50.382178    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0127 11:17:50.382756    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 11:17:50.436584    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0127 11:17:50.437058    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 11:17:50.487769    5908 provision.go:87] duration metric: took 14.3204751s to configureAuth
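configureAuth first refreshes the host-side copies of ca.pem, cert.pem and key.pem, then issues a per-node Docker server certificate whose SANs are exactly the names the daemon must be reachable as: 127.0.0.1, the VM's address 172.29.196.110, the hostname ha-011400-m03, localhost and minikube (see the "generating server cert" line above). A self-contained Go sketch of producing a server certificate with that SAN set; for brevity it self-signs rather than signing with the minikube CA, and the 26280h validity comes from the CertExpiration value in the cluster config:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-011400-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the "generating server cert" log line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.29.196.110")},
            DNSNames:    []string{"ha-011400-m03", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }

The resulting server.pem and server-key.pem are then copied into /etc/docker on the node, alongside ca.pem, as the copyRemoteCerts step above shows.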
	I0127 11:17:50.487769    5908 buildroot.go:189] setting minikube options for container-runtime
	I0127 11:17:50.488660    5908 config.go:182] Loaded profile config "ha-011400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 11:17:50.488887    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:17:52.582668    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:17:52.582668    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:52.582668    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:17:55.083340    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:17:55.083848    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:55.091466    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:17:55.092349    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.196.110 22 <nil> <nil>}
	I0127 11:17:55.092349    5908 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0127 11:17:55.216448    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0127 11:17:55.216448    5908 buildroot.go:70] root file system type: tmpfs
	I0127 11:17:55.217230    5908 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0127 11:17:55.217230    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:17:57.300448    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:17:57.300448    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:57.300448    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:17:59.821708    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:17:59.821708    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:59.827994    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:17:59.828700    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.196.110 22 <nil> <nil>}
	I0127 11:17:59.828700    5908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.29.192.249"
	Environment="NO_PROXY=172.29.192.249,172.29.195.173"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0127 11:17:59.975249    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.29.192.249
	Environment=NO_PROXY=172.29.192.249,172.29.195.173
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0127 11:17:59.975345    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:18:02.087930    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:18:02.087930    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:02.087930    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:18:04.614202    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:18:04.614202    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:04.620291    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:18:04.620291    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.196.110 22 <nil> <nil>}
	I0127 11:18:04.620291    5908 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0127 11:18:06.816543    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0127 11:18:06.816543    5908 machine.go:96] duration metric: took 44.9142518s to provisionDockerMachine
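The provisioner renders the whole docker.service unit as a string (the block printed above), pipes it to `sudo tee /lib/systemd/system/docker.service.new`, and only moves it into place, reloads systemd and restarts Docker when `diff -u` reports a difference with the existing unit; on this fresh node no unit exists yet, so diff fails and the new file is installed and enabled. The node-specific parts are the NO_PROXY environment entries (one per already-running control-plane peer) and the TLS flags in ExecStart. A small Go sketch of rendering just those variable parts with text/template, illustrative only and not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // A cut-down version of the unit rendered in the log: only the pieces
    // that vary per node (NO_PROXY values and the TLS'd ExecStart) are shown.
    const unitTmpl = `[Service]
    {{range .NoProxy}}Environment="NO_PROXY={{.}}"
    {{end}}ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem
    `

    func main() {
        data := struct{ NoProxy []string }{
            NoProxy: []string{"172.29.192.249", "172.29.192.249,172.29.195.173"},
        }
        tmpl := template.Must(template.New("docker.service").Parse(unitTmpl))
        if err := tmpl.Execute(os.Stdout, data); err != nil {
            panic(err)
        }
    }

Rendering with the two peer IPs reproduces the pair of Environment="NO_PROXY=..." lines seen in the unit above.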
	I0127 11:18:06.816543    5908 client.go:171] duration metric: took 1m54.4750018s to LocalClient.Create
	I0127 11:18:06.816543    5908 start.go:167] duration metric: took 1m54.4755264s to libmachine.API.Create "ha-011400"
	I0127 11:18:06.816543    5908 start.go:293] postStartSetup for "ha-011400-m03" (driver="hyperv")
	I0127 11:18:06.816543    5908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:18:06.832104    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:18:06.832104    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:18:08.930234    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:18:08.930234    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:08.931246    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:18:11.453563    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:18:11.453563    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:11.454799    5908 sshutil.go:53] new ssh client: &{IP:172.29.196.110 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m03\id_rsa Username:docker}
	I0127 11:18:11.554257    5908 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7221039s)
	I0127 11:18:11.567178    5908 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:18:11.576853    5908 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 11:18:11.576853    5908 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0127 11:18:11.576853    5908 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0127 11:18:11.578732    5908 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> 59562.pem in /etc/ssl/certs
	I0127 11:18:11.578732    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> /etc/ssl/certs/59562.pem
	I0127 11:18:11.591843    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:18:11.611510    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem --> /etc/ssl/certs/59562.pem (1708 bytes)
	I0127 11:18:11.657489    5908 start.go:296] duration metric: took 4.8408949s for postStartSetup
	I0127 11:18:11.660484    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:18:13.781966    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:18:13.782395    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:13.782395    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:18:16.343174    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:18:16.343174    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:16.343174    5908 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\config.json ...
	I0127 11:18:16.346162    5908 start.go:128] duration metric: took 2m4.0094253s to createHost
	I0127 11:18:16.346245    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:18:18.470069    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:18:18.470883    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:18.470883    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:18:21.024362    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:18:21.024362    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:21.029192    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:18:21.029522    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.196.110 22 <nil> <nil>}
	I0127 11:18:21.029522    5908 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:18:21.156459    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737976701.168097346
	
	I0127 11:18:21.156519    5908 fix.go:216] guest clock: 1737976701.168097346
	I0127 11:18:21.156519    5908 fix.go:229] Guest: 2025-01-27 11:18:21.168097346 +0000 UTC Remote: 2025-01-27 11:18:16.3462458 +0000 UTC m=+549.204315501 (delta=4.821851546s)
	I0127 11:18:21.156637    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:18:23.237155    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:18:23.237155    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:23.237389    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:18:25.749658    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:18:25.749658    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:25.757370    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:18:25.758118    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.196.110 22 <nil> <nil>}
	I0127 11:18:25.758118    5908 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1737976701
	I0127 11:18:25.895516    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 27 11:18:21 UTC 2025
	
	I0127 11:18:25.895516    5908 fix.go:236] clock set: Mon Jan 27 11:18:21 UTC 2025
	 (err=<nil>)
	I0127 11:18:25.895626    5908 start.go:83] releasing machines lock for "ha-011400-m03", held for 2m13.5591827s
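Before the machines lock is released, the driver compares the guest clock against the host: it reads `date +%s.%N` over SSH, computes the delta against the host-side timestamp recorded when createHost finished (4.82s here), and forces the guest clock with `sudo date -s @<seconds>`. A small Go sketch of that bookkeeping, using the value from the log:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns the output of `date +%s.%N` into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, _ = strconv.ParseInt(parts[1], 10, 64)
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1737976701.168097346") // value from the log
        if err != nil {
            panic(err)
        }
        delta := guest.Sub(time.Now())
        fmt.Printf("guest/host delta: %v\n", delta)
        // When the drift warrants it, the guest clock is forced with:
        fmt.Printf("sudo date -s @%d\n", guest.Unix())
    }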
	I0127 11:18:25.895833    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:18:28.027116    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:18:28.027699    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:28.027699    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:18:30.572231    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:18:30.572731    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:30.575389    5908 out.go:177] * Found network options:
	I0127 11:18:30.578132    5908 out.go:177]   - NO_PROXY=172.29.192.249,172.29.195.173
	W0127 11:18:30.581096    5908 proxy.go:119] fail to check proxy env: Error ip not in block
	W0127 11:18:30.581161    5908 proxy.go:119] fail to check proxy env: Error ip not in block
	I0127 11:18:30.583464    5908 out.go:177]   - NO_PROXY=172.29.192.249,172.29.195.173
	W0127 11:18:30.585672    5908 proxy.go:119] fail to check proxy env: Error ip not in block
	W0127 11:18:30.585672    5908 proxy.go:119] fail to check proxy env: Error ip not in block
	W0127 11:18:30.587072    5908 proxy.go:119] fail to check proxy env: Error ip not in block
	W0127 11:18:30.587072    5908 proxy.go:119] fail to check proxy env: Error ip not in block
	I0127 11:18:30.588998    5908 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0127 11:18:30.588998    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:18:30.598210    5908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 11:18:30.599208    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:18:32.834834    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:18:32.834834    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:32.835176    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:18:32.837233    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:18:32.837286    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:32.837286    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:18:35.533454    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:18:35.533454    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:35.533454    5908 sshutil.go:53] new ssh client: &{IP:172.29.196.110 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m03\id_rsa Username:docker}
	I0127 11:18:35.558170    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:18:35.558170    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:35.558170    5908 sshutil.go:53] new ssh client: &{IP:172.29.196.110 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m03\id_rsa Username:docker}
	I0127 11:18:35.625022    5908 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0257627s)
	W0127 11:18:35.625022    5908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:18:35.635998    5908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:18:35.640983    5908 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0519325s)
	W0127 11:18:35.640983    5908 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
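Note what actually failed here: the connectivity probe runs through ssh_runner inside the Linux guest, but the command is the Windows binary name `curl.exe`, so bash reports "command not found" (exit 127) rather than any real proxy or network problem; the "Failing to connect to https://registry.k8s.io/" warning a few lines below appears to be driven by that result. Purely as an illustration, and not minikube's code, the probe binary would need to be chosen by the OS it runs on rather than the host OS:

    package main

    import "fmt"

    // curlBinary picks a curl binary name for the system the probe will run
    // on. Hypothetical helper, shown only to make the failure mode explicit:
    // "curl.exe" exists on the Windows host, not inside the buildroot guest.
    func curlBinary(targetOS string) string {
        if targetOS == "windows" {
            return "curl.exe"
        }
        return "curl"
    }

    func main() {
        fmt.Println(curlBinary("linux")) // prints "curl": what the in-guest probe would need
    }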
	I0127 11:18:35.673937    5908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 11:18:35.674685    5908 start.go:495] detecting cgroup driver to use...
	I0127 11:18:35.674821    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:18:35.720910    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0127 11:18:35.755071    5908 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0127 11:18:35.755071    5908 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0127 11:18:35.757378    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 11:18:35.778461    5908 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 11:18:35.789284    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 11:18:35.826146    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 11:18:35.855760    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 11:18:35.886604    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 11:18:35.917791    5908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:18:35.948291    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 11:18:35.977945    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 11:18:36.005895    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 11:18:36.037877    5908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:18:36.059735    5908 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 11:18:36.068755    5908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 11:18:36.101734    5908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 11:18:36.129217    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:18:36.315439    5908 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 11:18:36.347515    5908 start.go:495] detecting cgroup driver to use...
	I0127 11:18:36.359561    5908 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0127 11:18:36.393846    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:18:36.430207    5908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 11:18:36.474166    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:18:36.517092    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 11:18:36.552566    5908 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 11:18:36.616099    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 11:18:36.638165    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:18:36.682044    5908 ssh_runner.go:195] Run: which cri-dockerd
	I0127 11:18:36.698338    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0127 11:18:36.713902    5908 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0127 11:18:36.757998    5908 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0127 11:18:36.941426    5908 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0127 11:18:37.143118    5908 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0127 11:18:37.143118    5908 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
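Cgroup-driver alignment happens in two places: containerd's config.toml is patched in place with sed (SystemdCgroup = false, the runc v2 shim, sandbox image pause:3.10), and Docker gets a small /etc/docker/daemon.json pushed over SSH (the 130-byte "scp memory" transfer above). The log does not print that file, so the following Go sketch emits only a plausible shape, assuming Docker's documented exec-opts mechanism for selecting the cgroupfs driver:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Guessed contents of the ~130-byte daemon.json; the real file is not
        // shown in the log, only its size and the "cgroupfs" intent.
        cfg := map[string]interface{}{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"},
        }
        out, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }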
	I0127 11:18:37.187492    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:18:37.388588    5908 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 11:18:40.006840    5908 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6180814s)
	I0127 11:18:40.017391    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0127 11:18:40.056297    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 11:18:40.097301    5908 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0127 11:18:40.307447    5908 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0127 11:18:40.497962    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:18:40.687858    5908 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0127 11:18:40.726944    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 11:18:40.760817    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:18:40.956069    5908 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0127 11:18:41.063705    5908 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0127 11:18:41.075042    5908 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0127 11:18:41.085844    5908 start.go:563] Will wait 60s for crictl version
	I0127 11:18:41.096015    5908 ssh_runner.go:195] Run: which crictl
	I0127 11:18:41.112680    5908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:18:41.172025    5908 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0127 11:18:41.180513    5908 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 11:18:41.234832    5908 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 11:18:41.272197    5908 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0127 11:18:41.274350    5908 out.go:177]   - env NO_PROXY=172.29.192.249
	I0127 11:18:41.277320    5908 out.go:177]   - env NO_PROXY=172.29.192.249,172.29.195.173
	I0127 11:18:41.279225    5908 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0127 11:18:41.283840    5908 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0127 11:18:41.283840    5908 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0127 11:18:41.283840    5908 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0127 11:18:41.283840    5908 ip.go:211] Found interface: {Index:17 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:43:05:a6 Flags:up|broadcast|multicast|running}
	I0127 11:18:41.285844    5908 ip.go:214] interface addr: fe80::8ceb:a58b:811a:7c79/64
	I0127 11:18:41.286854    5908 ip.go:214] interface addr: 172.29.192.1/20
	I0127 11:18:41.295850    5908 ssh_runner.go:195] Run: grep 172.29.192.1	host.minikube.internal$ /etc/hosts
	I0127 11:18:41.302330    5908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:18:41.322658    5908 mustload.go:65] Loading cluster: ha-011400
	I0127 11:18:41.323051    5908 config.go:182] Loaded profile config "ha-011400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 11:18:41.324301    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:18:43.374374    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:18:43.374395    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:43.374458    5908 host.go:66] Checking if "ha-011400" exists ...
	I0127 11:18:43.375251    5908 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400 for IP: 172.29.196.110
	I0127 11:18:43.375310    5908 certs.go:194] generating shared ca certs ...
	I0127 11:18:43.375310    5908 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:18:43.376124    5908 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0127 11:18:43.376183    5908 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0127 11:18:43.376800    5908 certs.go:256] generating profile certs ...
	I0127 11:18:43.377609    5908 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\client.key
	I0127 11:18:43.377769    5908 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key.95379c4e
	I0127 11:18:43.377932    5908 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt.95379c4e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.29.192.249 172.29.195.173 172.29.196.110 172.29.207.254]
	I0127 11:18:43.439771    5908 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt.95379c4e ...
	I0127 11:18:43.439771    5908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt.95379c4e: {Name:mk259769d2cf026cbf29030ab02d7f34cba67948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:18:43.441727    5908 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key.95379c4e ...
	I0127 11:18:43.441727    5908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key.95379c4e: {Name:mk73aadb8faa148e2210f77a4ec90c72b4380bab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:18:43.442331    5908 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt.95379c4e -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt
	I0127 11:18:43.459737    5908 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key.95379c4e -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key
	I0127 11:18:43.462086    5908 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.key
	I0127 11:18:43.462086    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0127 11:18:43.462336    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0127 11:18:43.462367    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0127 11:18:43.462367    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0127 11:18:43.462367    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0127 11:18:43.462902    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0127 11:18:43.463130    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0127 11:18:43.463130    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0127 11:18:43.463814    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem (1338 bytes)
	W0127 11:18:43.464239    5908 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956_empty.pem, impossibly tiny 0 bytes
	I0127 11:18:43.464304    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0127 11:18:43.464487    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0127 11:18:43.464487    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0127 11:18:43.465346    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0127 11:18:43.465948    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem (1708 bytes)
	I0127 11:18:43.466145    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:18:43.466145    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem -> /usr/share/ca-certificates/5956.pem
	I0127 11:18:43.466145    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> /usr/share/ca-certificates/59562.pem
	I0127 11:18:43.466900    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:18:45.585331    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:18:45.586315    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:45.586348    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:18:48.114322    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:18:48.114322    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:48.115040    5908 sshutil.go:53] new ssh client: &{IP:172.29.192.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\id_rsa Username:docker}
	I0127 11:18:48.212093    5908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0127 11:18:48.220405    5908 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0127 11:18:48.262107    5908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0127 11:18:48.269487    5908 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0127 11:18:48.300011    5908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0127 11:18:48.306781    5908 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0127 11:18:48.336555    5908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0127 11:18:48.342391    5908 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0127 11:18:48.377038    5908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0127 11:18:48.382465    5908 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0127 11:18:48.412335    5908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0127 11:18:48.419247    5908 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0127 11:18:48.440851    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:18:48.491100    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:18:48.544655    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:18:48.595899    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 11:18:48.639160    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0127 11:18:48.681807    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 11:18:48.736417    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:18:48.780956    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 11:18:48.827494    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:18:48.870321    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem --> /usr/share/ca-certificates/5956.pem (1338 bytes)
	I0127 11:18:48.913969    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem --> /usr/share/ca-certificates/59562.pem (1708 bytes)
	I0127 11:18:48.959350    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0127 11:18:48.992255    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0127 11:18:49.023130    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0127 11:18:49.058398    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0127 11:18:49.095190    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0127 11:18:49.132504    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0127 11:18:49.166443    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
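Note: the sshutil/ssh_runner lines above copy each certificate into the new control-plane node over SSH, authenticating with the profile's id_rsa key. A rough sketch of issuing one such remote command with golang.org/x/crypto/ssh is shown below; the host, key path and command are placeholders, and host-key verification is skipped only because the target here is a disposable test VM.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(`C:\path\to\machines\example\id_rsa`) // placeholder key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway local VM
	}
	client, err := ssh.Dial("tcp", "172.29.192.249:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Run one of the existence checks seen in the log and print its output.
	out, err := session.CombinedOutput("stat -c %s /var/lib/minikube/certs/ca.crt")
	fmt.Printf("%s (err=%v)\n", out, err)
}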
	I0127 11:18:49.210486    5908 ssh_runner.go:195] Run: openssl version
	I0127 11:18:49.229491    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:18:49.261055    5908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:18:49.268095    5908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:18:49.278341    5908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:18:49.297067    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:18:49.325322    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5956.pem && ln -fs /usr/share/ca-certificates/5956.pem /etc/ssl/certs/5956.pem"
	I0127 11:18:49.356829    5908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5956.pem
	I0127 11:18:49.363729    5908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:52 /usr/share/ca-certificates/5956.pem
	I0127 11:18:49.375542    5908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5956.pem
	I0127 11:18:49.396012    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5956.pem /etc/ssl/certs/51391683.0"
	I0127 11:18:49.426299    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59562.pem && ln -fs /usr/share/ca-certificates/59562.pem /etc/ssl/certs/59562.pem"
	I0127 11:18:49.454038    5908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59562.pem
	I0127 11:18:49.460991    5908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:52 /usr/share/ca-certificates/59562.pem
	I0127 11:18:49.469669    5908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59562.pem
	I0127 11:18:49.489462    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59562.pem /etc/ssl/certs/3ec20f2e.0"
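Note: each CA certificate above is also exposed under its OpenSSL subject-hash name in /etc/ssl/certs through an idempotent "test -L ... || ln -fs ..." guard. The guard logic looks roughly like the sketch below in Go; the target and link paths are simply the ones from the log, reused as an example.

package main

import (
	"log"
	"os"
)

// ensureSymlink creates link -> target unless a symlink already exists at link,
// mirroring the `test -L link || ln -fs target link` guard used in the log.
func ensureSymlink(target, link string) error {
	if fi, err := os.Lstat(link); err == nil && fi.Mode()&os.ModeSymlink != 0 {
		return nil // already a symlink, leave it alone
	}
	return os.Symlink(target, link)
}

func main() {
	if err := ensureSymlink("/etc/ssl/certs/minikubeCA.pem", "/etc/ssl/certs/b5213941.0"); err != nil {
		log.Fatal(err)
	}
}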
	I0127 11:18:49.523024    5908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:18:49.529707    5908 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 11:18:49.530046    5908 kubeadm.go:934] updating node {m03 172.29.196.110 8443 v1.32.1 docker true true} ...
	I0127 11:18:49.530229    5908 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-011400-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.196.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:ha-011400 Namespace:default APIServerHAVIP:172.29.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 11:18:49.530229    5908 kube-vip.go:115] generating kube-vip config ...
	I0127 11:18:49.535614    5908 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0127 11:18:49.569689    5908 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0127 11:18:49.569794    5908 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.29.207.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
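Note: the kube-vip static-pod manifest above carries all of its settings as container environment variables (the VIP address, the leader-election lease, the load-balancer port). A small sketch of reading those values back out of such a manifest with gopkg.in/yaml.v3 is given below; it assumes the YAML has been saved to a local kube-vip.yaml, and the struct declares only the fields this example touches.

package main

import (
	"fmt"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// manifest is a minimal shape of the static-pod YAML; only the fields we read are declared.
type manifest struct {
	Spec struct {
		Containers []struct {
			Image string `yaml:"image"`
			Env   []struct {
				Name  string `yaml:"name"`
				Value string `yaml:"value"`
			} `yaml:"env"`
		} `yaml:"containers"`
	} `yaml:"spec"`
}

func main() {
	data, err := os.ReadFile("kube-vip.yaml") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	var m manifest
	if err := yaml.Unmarshal(data, &m); err != nil {
		log.Fatal(err)
	}
	for _, c := range m.Spec.Containers {
		fmt.Println("image:", c.Image)
		for _, e := range c.Env {
			if e.Name == "address" || e.Name == "lb_port" || e.Name == "vip_leaderelection" {
				fmt.Printf("  %s = %s\n", e.Name, e.Value)
			}
		}
	}
}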
	I0127 11:18:49.580005    5908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 11:18:49.598363    5908 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.1': No such file or directory
	
	Initiating transfer...
	I0127 11:18:49.609583    5908 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.1
	I0127 11:18:49.626913    5908 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
	I0127 11:18:49.627047    5908 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet.sha256
	I0127 11:18:49.627112    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl -> /var/lib/minikube/binaries/v1.32.1/kubectl
	I0127 11:18:49.627112    5908 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm.sha256
	I0127 11:18:49.627289    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm -> /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0127 11:18:49.639074    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:18:49.640078    5908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0127 11:18:49.640078    5908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl
	I0127 11:18:49.661571    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet -> /var/lib/minikube/binaries/v1.32.1/kubelet
	I0127 11:18:49.661571    5908 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubectl': No such file or directory
	I0127 11:18:49.661571    5908 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubeadm': No such file or directory
	I0127 11:18:49.661571    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl --> /var/lib/minikube/binaries/v1.32.1/kubectl (57323672 bytes)
	I0127 11:18:49.661571    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm --> /var/lib/minikube/binaries/v1.32.1/kubeadm (70942872 bytes)
	I0127 11:18:49.673541    5908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet
	I0127 11:18:49.747787    5908 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubelet': No such file or directory
	I0127 11:18:49.748057    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet --> /var/lib/minikube/binaries/v1.32.1/kubelet (77398276 bytes)
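Note: the binary.go lines above fetch kubectl, kubeadm and kubelet straight from dl.k8s.io, using the companion .sha256 file as the checksum reference. A stripped-down sketch of that download-and-verify step with only the standard library follows; the URL is the one from the log, everything else is illustrative.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

// fetchBytes downloads url and returns the response body, failing on non-200 statuses.
func fetchBytes(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	const base = "https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl"

	bin, err := fetchBytes(base)
	if err != nil {
		log.Fatal(err)
	}
	sumFile, err := fetchBytes(base + ".sha256")
	if err != nil {
		log.Fatal(err)
	}
	want := strings.Fields(string(sumFile))[0] // the .sha256 file starts with the hex digest

	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != want {
		log.Fatalf("checksum mismatch: got %x, want %s", got, want)
	}
	fmt.Printf("kubectl verified, %d bytes\n", len(bin))
}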
	I0127 11:18:50.939592    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0127 11:18:50.957571    5908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0127 11:18:50.987122    5908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:18:51.022957    5908 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0127 11:18:51.075819    5908 ssh_runner.go:195] Run: grep 172.29.207.254	control-plane.minikube.internal$ /etc/hosts
	I0127 11:18:51.082289    5908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:18:51.113530    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:18:51.321859    5908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:18:51.353879    5908 host.go:66] Checking if "ha-011400" exists ...
	I0127 11:18:51.354667    5908 start.go:317] joinCluster: &{Name:ha-011400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-011400 Namespace:default APIServerHAVIP:172.29.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.192.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.195.173 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.29.196.110 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:18:51.355019    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0127 11:18:51.355019    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:18:53.419586    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:18:53.419586    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:53.419586    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:18:55.948211    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:18:55.948211    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:55.948738    5908 sshutil.go:53] new ssh client: &{IP:172.29.192.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\id_rsa Username:docker}
	I0127 11:18:56.182381    5908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.8272084s)
	I0127 11:18:56.182444    5908 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.29.196.110 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 11:18:56.182517    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m39fjp.qb6jxdygv1llgskr --discovery-token-ca-cert-hash sha256:7b53ba02e26824ededdf08178373023b65bda8005ddad46edfe91cb6a3cb8d3f --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-011400-m03 --control-plane --apiserver-advertise-address=172.29.196.110 --apiserver-bind-port=8443"
	I0127 11:19:37.306899    5908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m39fjp.qb6jxdygv1llgskr --discovery-token-ca-cert-hash sha256:7b53ba02e26824ededdf08178373023b65bda8005ddad46edfe91cb6a3cb8d3f --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-011400-m03 --control-plane --apiserver-advertise-address=172.29.196.110 --apiserver-bind-port=8443": (41.1239539s)
	I0127 11:19:37.306899    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0127 11:19:38.028485    5908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-011400-m03 minikube.k8s.io/updated_at=2025_01_27T11_19_38_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=ha-011400 minikube.k8s.io/primary=false
	I0127 11:19:38.228063    5908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-011400-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0127 11:19:38.382897    5908 start.go:319] duration metric: took 47.0277407s to joinCluster
	I0127 11:19:38.383568    5908 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.29.196.110 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 11:19:38.384618    5908 config.go:182] Loaded profile config "ha-011400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 11:19:38.386477    5908 out.go:177] * Verifying Kubernetes components...
	I0127 11:19:38.400547    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:19:38.762570    5908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:19:38.800463    5908 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 11:19:38.800948    5908 kapi.go:59] client config for ha-011400: &rest.Config{Host:"https://172.29.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-011400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-011400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x301e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0127 11:19:38.800948    5908 kubeadm.go:483] Overriding stale ClientConfig host https://172.29.207.254:8443 with https://172.29.192.249:8443
	I0127 11:19:38.803364    5908 node_ready.go:35] waiting up to 6m0s for node "ha-011400-m03" to be "Ready" ...
	I0127 11:19:38.803547    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:38.803547    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:38.803598    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:38.803598    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:38.819512    5908 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0127 11:19:39.303981    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:39.303981    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:39.303981    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:39.303981    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:39.309661    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:39.803966    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:39.803966    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:39.803966    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:39.803966    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:39.810405    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:19:40.304063    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:40.304063    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:40.304063    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:40.304063    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:40.309590    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:40.804461    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:40.804461    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:40.804461    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:40.804461    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:40.808913    5908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 11:19:40.809555    5908 node_ready.go:53] node "ha-011400-m03" has status "Ready":"False"
	I0127 11:19:41.303596    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:41.303596    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:41.303596    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:41.303596    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:41.315883    5908 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0127 11:19:41.803764    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:41.803764    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:41.803764    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:41.803764    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:41.810008    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:19:42.303429    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:42.303429    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:42.303429    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:42.303429    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:42.312327    5908 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0127 11:19:42.805064    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:42.805064    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:42.805064    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:42.805064    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:42.810084    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:42.810487    5908 node_ready.go:53] node "ha-011400-m03" has status "Ready":"False"
	I0127 11:19:43.303849    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:43.303849    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:43.303849    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:43.303849    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:43.315192    5908 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0127 11:19:43.804272    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:43.804272    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:43.804272    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:43.804272    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:43.812730    5908 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0127 11:19:44.304498    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:44.304498    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:44.304498    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:44.304498    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:44.310686    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:19:44.803787    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:44.803787    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:44.803787    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:44.803787    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:44.809772    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:45.305399    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:45.305458    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:45.305458    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:45.305458    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:45.374520    5908 round_trippers.go:574] Response Status: 200 OK in 69 milliseconds
	I0127 11:19:45.376533    5908 node_ready.go:53] node "ha-011400-m03" has status "Ready":"False"
	I0127 11:19:45.803682    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:45.804168    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:45.804168    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:45.804168    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:45.812576    5908 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0127 11:19:46.305337    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:46.305337    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:46.305337    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:46.305337    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:46.310664    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:46.803622    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:46.803622    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:46.803622    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:46.803622    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:46.808629    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:47.305048    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:47.305048    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:47.305048    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:47.305048    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:47.310706    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:47.803545    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:47.803545    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:47.803545    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:47.803545    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:47.809891    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:19:47.814521    5908 node_ready.go:53] node "ha-011400-m03" has status "Ready":"False"
	I0127 11:19:48.303693    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:48.303693    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:48.303693    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:48.303693    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:48.309415    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:48.803953    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:48.803953    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:48.803953    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:48.803953    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:48.809672    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:49.304690    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:49.304690    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:49.304690    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:49.304690    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:49.312229    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:19:49.804432    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:49.804432    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:49.804432    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:49.804432    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:49.821104    5908 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0127 11:19:49.823150    5908 node_ready.go:53] node "ha-011400-m03" has status "Ready":"False"
	I0127 11:19:50.303569    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:50.303569    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:50.303569    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:50.303569    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:50.309945    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:19:50.804512    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:50.804860    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:50.804860    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:50.804860    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:50.810278    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:51.304460    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:51.304460    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:51.304460    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:51.304460    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:51.312163    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:19:51.804468    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:51.804468    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:51.804468    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:51.804468    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:51.810630    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:19:52.303605    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:52.303605    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:52.303605    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:52.303605    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:52.312356    5908 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0127 11:19:52.313217    5908 node_ready.go:53] node "ha-011400-m03" has status "Ready":"False"
	I0127 11:19:52.804596    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:52.804596    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:52.804596    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:52.804596    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:52.828103    5908 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0127 11:19:53.304153    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:53.304153    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:53.304153    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:53.304153    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:53.309179    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:19:53.804526    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:53.804526    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:53.804526    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:53.804526    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:53.810810    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:19:54.305035    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:54.305035    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:54.305035    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:54.305035    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:54.312428    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:19:54.805332    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:54.805409    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:54.805409    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:54.805409    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:54.811249    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:54.811567    5908 node_ready.go:53] node "ha-011400-m03" has status "Ready":"False"
	I0127 11:19:55.304919    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:55.304999    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:55.304999    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:55.304999    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:55.310514    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:19:55.804009    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:55.804009    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:55.804009    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:55.804009    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:55.812796    5908 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0127 11:19:56.304046    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:56.304046    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:56.304046    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:56.304046    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:56.309360    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:56.804247    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:56.804247    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:56.804247    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:56.804247    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:56.809025    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:19:57.304547    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:57.304547    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:57.304547    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:57.304547    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:57.311593    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:19:57.312280    5908 node_ready.go:53] node "ha-011400-m03" has status "Ready":"False"
	I0127 11:19:57.804770    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:57.804770    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:57.804770    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:57.804770    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:57.809962    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:58.304459    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:58.305046    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:58.305046    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:58.305046    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:58.309238    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:19:58.804626    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:58.804626    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:58.804626    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:58.804626    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:58.809810    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:59.303778    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:59.304299    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:59.304299    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:59.304299    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:59.314794    5908 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0127 11:19:59.315531    5908 node_ready.go:53] node "ha-011400-m03" has status "Ready":"False"
	I0127 11:19:59.804536    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:59.804610    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:59.804610    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:59.804610    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:59.810319    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:20:00.305856    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:20:00.305941    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:00.305941    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:00.305941    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:00.313341    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:20:00.804074    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:20:00.804074    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:00.804074    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:00.804074    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:00.809241    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:20:01.304092    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:20:01.304525    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:01.304525    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:01.304525    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:01.313882    5908 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0127 11:20:01.314650    5908 node_ready.go:49] node "ha-011400-m03" has status "Ready":"True"
	I0127 11:20:01.314682    5908 node_ready.go:38] duration metric: took 22.5110841s for node "ha-011400-m03" to be "Ready" ...
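Note: node_ready.go above polls GET /api/v1/nodes/ha-011400-m03 roughly every half second until the node reports Ready (about 22.5s in this run). A stripped-down version of that poll using only the standard library is sketched below, authenticating with the profile's client certificate and checking the Ready condition; the certificate paths are placeholders and the endpoint is copied loosely from the log.

package main

import (
	"crypto/tls"
	"crypto/x509"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
	"time"
)

// node captures just the condition list from the Node object returned by the API server.
type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	cert, err := tls.LoadX509KeyPair("client.crt", "client.key") // placeholder profile certs
	if err != nil {
		log.Fatal(err)
	}
	caPEM, err := os.ReadFile("ca.crt") // placeholder cluster CA
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	}}

	const url = "https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03"
	for deadline := time.Now().Add(6 * time.Minute); time.Now().Before(deadline); time.Sleep(500 * time.Millisecond) {
		resp, err := client.Get(url)
		if err != nil {
			continue // API server may still be settling; retry
		}
		var n node
		err = json.NewDecoder(resp.Body).Decode(&n)
		resp.Body.Close()
		if err != nil {
			continue
		}
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				fmt.Println("node is Ready")
				return
			}
		}
	}
	log.Fatal("timed out waiting for node to become Ready")
}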
	I0127 11:20:01.314682    5908 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:20:01.314838    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods
	I0127 11:20:01.314867    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:01.314867    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:01.314867    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:01.328003    5908 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0127 11:20:01.342489    5908 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-228t7" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:01.342489    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-228t7
	I0127 11:20:01.342489    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:01.342489    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:01.342489    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:01.351583    5908 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0127 11:20:01.353402    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:20:01.353402    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:01.353402    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:01.353402    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:01.357687    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:20:01.358436    5908 pod_ready.go:93] pod "coredns-668d6bf9bc-228t7" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:01.358436    5908 pod_ready.go:82] duration metric: took 15.9462ms for pod "coredns-668d6bf9bc-228t7" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:01.358482    5908 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-8b9xh" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:01.358559    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-8b9xh
	I0127 11:20:01.358599    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:01.358656    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:01.358656    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:01.361821    5908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 11:20:01.362809    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:20:01.363525    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:01.363525    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:01.363525    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:01.367103    5908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 11:20:01.368528    5908 pod_ready.go:93] pod "coredns-668d6bf9bc-8b9xh" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:01.368557    5908 pod_ready.go:82] duration metric: took 10.075ms for pod "coredns-668d6bf9bc-8b9xh" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:01.368557    5908 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:01.368702    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-011400
	I0127 11:20:01.368729    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:01.368729    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:01.368729    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:01.375491    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:20:01.376325    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:20:01.376398    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:01.376398    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:01.376398    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:01.387958    5908 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0127 11:20:01.388814    5908 pod_ready.go:93] pod "etcd-ha-011400" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:01.388866    5908 pod_ready.go:82] duration metric: took 20.2607ms for pod "etcd-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:01.388936    5908 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:01.389070    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-011400-m02
	I0127 11:20:01.389131    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:01.389131    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:01.389131    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:01.403080    5908 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0127 11:20:01.403611    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:20:01.403611    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:01.403611    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:01.403611    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:01.410333    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:20:01.410887    5908 pod_ready.go:93] pod "etcd-ha-011400-m02" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:01.410887    5908 pod_ready.go:82] duration metric: took 21.9508ms for pod "etcd-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:01.410887    5908 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-011400-m03" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:01.504559    5908 request.go:632] Waited for 93.6717ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-011400-m03
	I0127 11:20:01.504559    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-011400-m03
	I0127 11:20:01.504559    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:01.504559    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:01.504559    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:01.511245    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:20:01.704573    5908 request.go:632] Waited for 192.5179ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:20:01.704573    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:20:01.704573    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:01.704573    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:01.704573    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:01.709835    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:20:01.710388    5908 pod_ready.go:93] pod "etcd-ha-011400-m03" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:01.710388    5908 pod_ready.go:82] duration metric: took 299.4987ms for pod "etcd-ha-011400-m03" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:01.710619    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:01.904530    5908 request.go:632] Waited for 193.8001ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-011400
	I0127 11:20:01.904530    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-011400
	I0127 11:20:01.904530    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:01.904530    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:01.904530    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:01.912352    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:20:02.105191    5908 request.go:632] Waited for 191.2307ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:20:02.105191    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:20:02.105191    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:02.105191    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:02.105191    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:02.110692    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:20:02.112177    5908 pod_ready.go:93] pod "kube-apiserver-ha-011400" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:02.112329    5908 pod_ready.go:82] duration metric: took 401.7066ms for pod "kube-apiserver-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:02.112329    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:02.304481    5908 request.go:632] Waited for 192.0174ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-011400-m02
	I0127 11:20:02.304481    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-011400-m02
	I0127 11:20:02.304481    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:02.304481    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:02.304481    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:02.314091    5908 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0127 11:20:02.504998    5908 request.go:632] Waited for 189.9145ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:20:02.505487    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:20:02.505591    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:02.505591    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:02.505591    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:02.511151    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:20:02.512156    5908 pod_ready.go:93] pod "kube-apiserver-ha-011400-m02" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:02.512259    5908 pod_ready.go:82] duration metric: took 399.9251ms for pod "kube-apiserver-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:02.512259    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-011400-m03" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:02.704273    5908 request.go:632] Waited for 192.0123ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-011400-m03
	I0127 11:20:02.704273    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-011400-m03
	I0127 11:20:02.704273    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:02.704273    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:02.704273    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:02.709726    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:20:02.904922    5908 request.go:632] Waited for 193.6838ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:20:02.904922    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:20:02.905288    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:02.905288    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:02.905288    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:02.910395    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:20:02.911006    5908 pod_ready.go:93] pod "kube-apiserver-ha-011400-m03" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:02.911118    5908 pod_ready.go:82] duration metric: took 398.8054ms for pod "kube-apiserver-ha-011400-m03" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:02.911118    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:03.104160    5908 request.go:632] Waited for 192.9377ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-011400
	I0127 11:20:03.104160    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-011400
	I0127 11:20:03.104537    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:03.104537    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:03.104537    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:03.110469    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:20:03.304170    5908 request.go:632] Waited for 192.4881ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:20:03.304170    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:20:03.304170    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:03.304170    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:03.304170    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:03.310524    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:20:03.311315    5908 pod_ready.go:93] pod "kube-controller-manager-ha-011400" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:03.311315    5908 pod_ready.go:82] duration metric: took 400.1922ms for pod "kube-controller-manager-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:03.311315    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:03.504341    5908 request.go:632] Waited for 192.8403ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-011400-m02
	I0127 11:20:03.504924    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-011400-m02
	I0127 11:20:03.504959    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:03.504959    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:03.505002    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:03.509914    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:20:03.704519    5908 request.go:632] Waited for 193.6267ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:20:03.704849    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:20:03.704849    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:03.704849    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:03.704849    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:03.711623    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:20:03.712641    5908 pod_ready.go:93] pod "kube-controller-manager-ha-011400-m02" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:03.712641    5908 pod_ready.go:82] duration metric: took 401.3223ms for pod "kube-controller-manager-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:03.712811    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-011400-m03" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:03.904799    5908 request.go:632] Waited for 191.867ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-011400-m03
	I0127 11:20:03.904799    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-011400-m03
	I0127 11:20:03.904799    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:03.904799    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:03.904799    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:03.914159    5908 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0127 11:20:04.104369    5908 request.go:632] Waited for 188.6075ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:20:04.104760    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:20:04.104760    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:04.104760    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:04.104760    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:04.109110    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:20:04.109662    5908 pod_ready.go:93] pod "kube-controller-manager-ha-011400-m03" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:04.109662    5908 pod_ready.go:82] duration metric: took 396.8471ms for pod "kube-controller-manager-ha-011400-m03" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:04.109662    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4pjv8" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:04.304912    5908 request.go:632] Waited for 195.2482ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4pjv8
	I0127 11:20:04.305329    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4pjv8
	I0127 11:20:04.305329    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:04.305329    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:04.305329    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:04.310771    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:20:04.504737    5908 request.go:632] Waited for 193.0176ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:20:04.504737    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:20:04.504737    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:04.504737    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:04.504737    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:04.519628    5908 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0127 11:20:04.520605    5908 pod_ready.go:93] pod "kube-proxy-4pjv8" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:04.520605    5908 pod_ready.go:82] duration metric: took 410.939ms for pod "kube-proxy-4pjv8" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:04.520605    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hg72m" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:04.705917    5908 request.go:632] Waited for 185.3096ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg72m
	I0127 11:20:04.706419    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg72m
	I0127 11:20:04.706466    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:04.706466    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:04.706466    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:04.712526    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:20:04.904658    5908 request.go:632] Waited for 191.1289ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:20:04.904658    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:20:04.904658    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:04.904658    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:04.904658    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:04.910591    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:20:04.911496    5908 pod_ready.go:93] pod "kube-proxy-hg72m" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:04.911603    5908 pod_ready.go:82] duration metric: took 390.9943ms for pod "kube-proxy-hg72m" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:04.911603    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x52km" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:05.104818    5908 request.go:632] Waited for 193.2123ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x52km
	I0127 11:20:05.104818    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x52km
	I0127 11:20:05.104818    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:05.104818    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:05.104818    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:05.110849    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:20:05.305207    5908 request.go:632] Waited for 193.0108ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:20:05.305207    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:20:05.305672    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:05.305672    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:05.305672    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:05.310840    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:20:05.311541    5908 pod_ready.go:93] pod "kube-proxy-x52km" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:05.311541    5908 pod_ready.go:82] duration metric: took 399.9337ms for pod "kube-proxy-x52km" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:05.311541    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:05.505053    5908 request.go:632] Waited for 193.5096ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-011400
	I0127 11:20:05.505053    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-011400
	I0127 11:20:05.505053    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:05.505053    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:05.505053    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:05.510263    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:20:05.704757    5908 request.go:632] Waited for 192.8756ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:20:05.704757    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:20:05.704757    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:05.705265    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:05.705265    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:05.710676    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:20:05.711720    5908 pod_ready.go:93] pod "kube-scheduler-ha-011400" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:05.711787    5908 pod_ready.go:82] duration metric: took 400.2416ms for pod "kube-scheduler-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:05.711787    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:05.904640    5908 request.go:632] Waited for 192.7565ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-011400-m02
	I0127 11:20:05.904640    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-011400-m02
	I0127 11:20:05.904640    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:05.904640    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:05.904640    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:05.911198    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:20:06.104479    5908 request.go:632] Waited for 192.5249ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:20:06.104479    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:20:06.104479    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:06.104479    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:06.104479    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:06.110487    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:20:06.111180    5908 pod_ready.go:93] pod "kube-scheduler-ha-011400-m02" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:06.111180    5908 pod_ready.go:82] duration metric: took 399.3887ms for pod "kube-scheduler-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:06.111180    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-011400-m03" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:06.305002    5908 request.go:632] Waited for 193.8203ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-011400-m03
	I0127 11:20:06.305002    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-011400-m03
	I0127 11:20:06.305002    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:06.305002    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:06.305002    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:06.312376    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:20:06.505014    5908 request.go:632] Waited for 191.8398ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:20:06.505014    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:20:06.505014    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:06.505014    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:06.505014    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:06.511220    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:20:06.511946    5908 pod_ready.go:93] pod "kube-scheduler-ha-011400-m03" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:06.512042    5908 pod_ready.go:82] duration metric: took 400.8579ms for pod "kube-scheduler-ha-011400-m03" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:06.512042    5908 pod_ready.go:39] duration metric: took 5.1972597s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:20:06.512042    5908 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:20:06.522412    5908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:20:06.549068    5908 api_server.go:72] duration metric: took 28.1651244s to wait for apiserver process to appear ...
	I0127 11:20:06.549140    5908 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:20:06.549140    5908 api_server.go:253] Checking apiserver healthz at https://172.29.192.249:8443/healthz ...
	I0127 11:20:06.562145    5908 api_server.go:279] https://172.29.192.249:8443/healthz returned 200:
	ok
	I0127 11:20:06.562306    5908 round_trippers.go:463] GET https://172.29.192.249:8443/version
	I0127 11:20:06.562379    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:06.562406    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:06.562418    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:06.564037    5908 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0127 11:20:06.564167    5908 api_server.go:141] control plane version: v1.32.1
	I0127 11:20:06.564167    5908 api_server.go:131] duration metric: took 15.0274ms to wait for apiserver health ...
	I0127 11:20:06.564167    5908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:20:06.704449    5908 request.go:632] Waited for 140.2807ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods
	I0127 11:20:06.704917    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods
	I0127 11:20:06.704917    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:06.704917    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:06.704917    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:06.714722    5908 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0127 11:20:06.724819    5908 system_pods.go:59] 24 kube-system pods found
	I0127 11:20:06.724819    5908 system_pods.go:61] "coredns-668d6bf9bc-228t7" [ac40dfec-9e9f-4414-9259-a7dadfb2c93d] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "coredns-668d6bf9bc-8b9xh" [647a1e55-d5ce-4f2b-933f-8caf13d7463b] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "etcd-ha-011400" [90238c1c-70b2-47e8-9bab-49f2334ca4b3] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "etcd-ha-011400-m02" [fcda2776-bc47-47af-948a-94e549a41fec] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "etcd-ha-011400-m03" [2e852046-3be3-4615-a27f-0ec1a5673416] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kindnet-fs97j" [d480fa1c-808e-4c5d-818e-26281dca23d4] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kindnet-ll5br" [6a2a0fea-258a-4593-8445-398f37e379e4] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kindnet-mg445" [37787d9b-44c4-4e83-8d2c-e67333301fd1] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-apiserver-ha-011400" [7bda282c-7cb1-46f1-9bb8-366bc992aaed] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-apiserver-ha-011400-m02" [8e5dcd2c-fbca-473d-8aa2-70e7fb8866c7] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-apiserver-ha-011400-m03" [80fe2bca-85bb-4211-8792-5d59b5dab513] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-controller-manager-ha-011400" [1b8e425d-03da-4d95-86e5-e1e6f15b64bd] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-controller-manager-ha-011400-m02" [c20cfbe1-337f-462f-968f-c19741634ac4] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-controller-manager-ha-011400-m03" [ee7a8965-3fd5-41ee-980e-896aa7293038] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-proxy-4pjv8" [c0b28c82-50ac-4021-949d-75883580a018] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-proxy-hg72m" [dc860339-d169-452b-9621-170ae73c7a5e] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-proxy-x52km" [0a6cc7f2-2b15-4db1-b5fb-d6448d4bd295] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-scheduler-ha-011400" [35220ede-c59f-4d24-88c5-728088af2abf] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-scheduler-ha-011400-m02" [250614b6-0c08-4e8a-a080-58253b81d4f7] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-scheduler-ha-011400-m03" [ef2c825c-f959-4df5-afa0-f8e34a48aadf] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-vip-ha-011400" [31c47527-c1fe-4064-bcb4-faffcedab1f4] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-vip-ha-011400-m02" [1e3bf93b-caab-4f37-a8bd-36f0ad76eb4c] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-vip-ha-011400-m03" [64122fe5-f88f-430b-8e9b-e06e18929823] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "storage-provisioner" [2755d063-0183-41c1-9fe8-e533017aef39] Running
	I0127 11:20:06.724819    5908 system_pods.go:74] duration metric: took 160.6507ms to wait for pod list to return data ...
	I0127 11:20:06.724819    5908 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:20:06.904948    5908 request.go:632] Waited for 179.1715ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/default/serviceaccounts
	I0127 11:20:06.905447    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/default/serviceaccounts
	I0127 11:20:06.905447    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:06.905447    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:06.905447    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:06.911450    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:20:06.911450    5908 default_sa.go:45] found service account: "default"
	I0127 11:20:06.911450    5908 default_sa.go:55] duration metric: took 186.6289ms for default service account to be created ...
	I0127 11:20:06.911450    5908 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:20:07.104790    5908 request.go:632] Waited for 193.3378ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods
	I0127 11:20:07.104790    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods
	I0127 11:20:07.104790    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:07.104790    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:07.105223    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:07.114365    5908 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0127 11:20:07.128330    5908 system_pods.go:87] 24 kube-system pods found
	I0127 11:20:07.128330    5908 system_pods.go:105] "coredns-668d6bf9bc-228t7" [ac40dfec-9e9f-4414-9259-a7dadfb2c93d] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "coredns-668d6bf9bc-8b9xh" [647a1e55-d5ce-4f2b-933f-8caf13d7463b] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "etcd-ha-011400" [90238c1c-70b2-47e8-9bab-49f2334ca4b3] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "etcd-ha-011400-m02" [fcda2776-bc47-47af-948a-94e549a41fec] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "etcd-ha-011400-m03" [2e852046-3be3-4615-a27f-0ec1a5673416] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kindnet-fs97j" [d480fa1c-808e-4c5d-818e-26281dca23d4] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kindnet-ll5br" [6a2a0fea-258a-4593-8445-398f37e379e4] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kindnet-mg445" [37787d9b-44c4-4e83-8d2c-e67333301fd1] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-apiserver-ha-011400" [7bda282c-7cb1-46f1-9bb8-366bc992aaed] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-apiserver-ha-011400-m02" [8e5dcd2c-fbca-473d-8aa2-70e7fb8866c7] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-apiserver-ha-011400-m03" [80fe2bca-85bb-4211-8792-5d59b5dab513] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-controller-manager-ha-011400" [1b8e425d-03da-4d95-86e5-e1e6f15b64bd] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-controller-manager-ha-011400-m02" [c20cfbe1-337f-462f-968f-c19741634ac4] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-controller-manager-ha-011400-m03" [ee7a8965-3fd5-41ee-980e-896aa7293038] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-proxy-4pjv8" [c0b28c82-50ac-4021-949d-75883580a018] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-proxy-hg72m" [dc860339-d169-452b-9621-170ae73c7a5e] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-proxy-x52km" [0a6cc7f2-2b15-4db1-b5fb-d6448d4bd295] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-scheduler-ha-011400" [35220ede-c59f-4d24-88c5-728088af2abf] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-scheduler-ha-011400-m02" [250614b6-0c08-4e8a-a080-58253b81d4f7] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-scheduler-ha-011400-m03" [ef2c825c-f959-4df5-afa0-f8e34a48aadf] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-vip-ha-011400" [31c47527-c1fe-4064-bcb4-faffcedab1f4] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-vip-ha-011400-m02" [1e3bf93b-caab-4f37-a8bd-36f0ad76eb4c] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-vip-ha-011400-m03" [64122fe5-f88f-430b-8e9b-e06e18929823] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "storage-provisioner" [2755d063-0183-41c1-9fe8-e533017aef39] Running
	I0127 11:20:07.128330    5908 system_pods.go:147] duration metric: took 216.877ms to wait for k8s-apps to be running ...
	I0127 11:20:07.128330    5908 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 11:20:07.142916    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:20:07.170893    5908 system_svc.go:56] duration metric: took 42.5634ms WaitForService to wait for kubelet
	I0127 11:20:07.171013    5908 kubeadm.go:582] duration metric: took 28.7869977s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:20:07.171013    5908 node_conditions.go:102] verifying NodePressure condition ...
	I0127 11:20:07.304264    5908 request.go:632] Waited for 133.1263ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes
	I0127 11:20:07.304264    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes
	I0127 11:20:07.304264    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:07.304264    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:07.304264    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:07.313973    5908 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0127 11:20:07.315325    5908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 11:20:07.315325    5908 node_conditions.go:123] node cpu capacity is 2
	I0127 11:20:07.315325    5908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 11:20:07.315325    5908 node_conditions.go:123] node cpu capacity is 2
	I0127 11:20:07.315325    5908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 11:20:07.315325    5908 node_conditions.go:123] node cpu capacity is 2
	I0127 11:20:07.315325    5908 node_conditions.go:105] duration metric: took 144.3105ms to run NodePressure ...
	I0127 11:20:07.315325    5908 start.go:241] waiting for startup goroutines ...
	I0127 11:20:07.315325    5908 start.go:255] writing updated cluster config ...
	I0127 11:20:07.327405    5908 ssh_runner.go:195] Run: rm -f paused
	I0127 11:20:07.468457    5908 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 11:20:07.473338    5908 out.go:177] * Done! kubectl is now configured to use "ha-011400" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jan 27 11:12:32 ha-011400 dockerd[1448]: time="2025-01-27T11:12:32.791208816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 11:12:32 ha-011400 dockerd[1448]: time="2025-01-27T11:12:32.843556208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 27 11:12:32 ha-011400 dockerd[1448]: time="2025-01-27T11:12:32.843650809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 27 11:12:32 ha-011400 dockerd[1448]: time="2025-01-27T11:12:32.843671409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 11:12:32 ha-011400 dockerd[1448]: time="2025-01-27T11:12:32.843785910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 11:12:33 ha-011400 cri-dockerd[1342]: time="2025-01-27T11:12:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1935c738005cb77f13f750e1b189a2c871075e21c55639538224577889f20a82/resolv.conf as [nameserver 172.29.192.1]"
	Jan 27 11:12:33 ha-011400 cri-dockerd[1342]: time="2025-01-27T11:12:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3a60197906090b50c5485229f65e2090b0aa01f0f43bf2dd514c730b4ce5896f/resolv.conf as [nameserver 172.29.192.1]"
	Jan 27 11:12:33 ha-011400 dockerd[1448]: time="2025-01-27T11:12:33.453621263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 27 11:12:33 ha-011400 dockerd[1448]: time="2025-01-27T11:12:33.453836365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 27 11:12:33 ha-011400 dockerd[1448]: time="2025-01-27T11:12:33.454538173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 11:12:33 ha-011400 dockerd[1448]: time="2025-01-27T11:12:33.454833276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 11:12:33 ha-011400 dockerd[1448]: time="2025-01-27T11:12:33.500849580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 27 11:12:33 ha-011400 dockerd[1448]: time="2025-01-27T11:12:33.501209184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 27 11:12:33 ha-011400 dockerd[1448]: time="2025-01-27T11:12:33.501337286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 11:12:33 ha-011400 dockerd[1448]: time="2025-01-27T11:12:33.502542999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 11:20:45 ha-011400 dockerd[1448]: time="2025-01-27T11:20:45.727347385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 27 11:20:45 ha-011400 dockerd[1448]: time="2025-01-27T11:20:45.727449685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 27 11:20:45 ha-011400 dockerd[1448]: time="2025-01-27T11:20:45.727571686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 11:20:45 ha-011400 dockerd[1448]: time="2025-01-27T11:20:45.727782687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 11:20:45 ha-011400 cri-dockerd[1342]: time="2025-01-27T11:20:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0051fd728cd4db3fa1d459f6a64f0cf7abc9f0dbeaaee17684f20afab815f6ec/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jan 27 11:20:47 ha-011400 cri-dockerd[1342]: time="2025-01-27T11:20:47Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jan 27 11:20:47 ha-011400 dockerd[1448]: time="2025-01-27T11:20:47.743359503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 27 11:20:47 ha-011400 dockerd[1448]: time="2025-01-27T11:20:47.743524005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 27 11:20:47 ha-011400 dockerd[1448]: time="2025-01-27T11:20:47.743602906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 11:20:47 ha-011400 dockerd[1448]: time="2025-01-27T11:20:47.744276413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e9983636c7dcf       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   0051fd728cd4d       busybox-58667487b6-68jl6
	bcad71a4f97a9       c69fa2e9cbf5f                                                                                         9 minutes ago        Running             coredns                   0                   3a60197906090       coredns-668d6bf9bc-8b9xh
	f0e3ddbafad83       c69fa2e9cbf5f                                                                                         9 minutes ago        Running             coredns                   0                   1935c738005cb       coredns-668d6bf9bc-228t7
	4b61052edeb8d       6e38f40d628db                                                                                         9 minutes ago        Running             storage-provisioner       0                   eaa6ebb740ba0       storage-provisioner
	2069e52c51e41       kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108              9 minutes ago        Running             kindnet-cni               0                   e24b0a5e38273       kindnet-ll5br
	b57131c4a903e       e29f9c7391fd9                                                                                         9 minutes ago        Running             kube-proxy                0                   0aab982097c2b       kube-proxy-hg72m
	69457ef5aaab5       ghcr.io/kube-vip/kube-vip@sha256:717b8bef2758c10042d64ae7949201ef7f243d928fce265b04e488e844bf9528     9 minutes ago        Running             kube-vip                  0                   9fa384b6dac7d       kube-vip-ha-011400
	dcdc672289089       2b0d6572d062c                                                                                         9 minutes ago        Running             kube-scheduler            0                   2706c9625e77c       kube-scheduler-ha-011400
	198b69006a51b       019ee182b58e2                                                                                         9 minutes ago        Running             kube-controller-manager   0                   cd67bd2b10fab       kube-controller-manager-ha-011400
	3ad7004cc4fef       a9e7e6b294baf                                                                                         9 minutes ago        Running             etcd                      0                   23128693ce80a       etcd-ha-011400
	9bbef2b1e01c4       95c0bda56fc4d                                                                                         9 minutes ago        Running             kube-apiserver            0                   086cfc8d226c5       kube-apiserver-ha-011400
	
	
	==> coredns [bcad71a4f97a] <==
	[INFO] 10.244.0.4:33596 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000255003s
	[INFO] 10.244.0.4:60693 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000289503s
	[INFO] 10.244.2.2:55507 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000394005s
	[INFO] 10.244.2.2:50550 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000063701s
	[INFO] 10.244.2.2:44843 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106402s
	[INFO] 10.244.2.2:51347 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000236902s
	[INFO] 10.244.1.2:35484 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000177102s
	[INFO] 10.244.1.2:51286 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000201402s
	[INFO] 10.244.1.2:49276 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000303403s
	[INFO] 10.244.0.4:32804 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000228302s
	[INFO] 10.244.0.4:45584 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000223003s
	[INFO] 10.244.2.2:47725 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159501s
	[INFO] 10.244.2.2:43405 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000221303s
	[INFO] 10.244.2.2:55745 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000175002s
	[INFO] 10.244.1.2:52662 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000352304s
	[INFO] 10.244.1.2:36689 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000202002s
	[INFO] 10.244.1.2:54398 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000152602s
	[INFO] 10.244.0.4:50709 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000251503s
	[INFO] 10.244.0.4:47310 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000169302s
	[INFO] 10.244.0.4:57342 - 5 "PTR IN 1.192.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000353704s
	[INFO] 10.244.2.2:39323 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163502s
	[INFO] 10.244.2.2:54278 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000253202s
	[INFO] 10.244.2.2:47951 - 5 "PTR IN 1.192.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000176802s
	[INFO] 10.244.1.2:49224 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000312603s
	[INFO] 10.244.1.2:57693 - 5 "PTR IN 1.192.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000393904s
	
	
	==> coredns [f0e3ddbafad8] <==
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40089 - 18739 "HINFO IN 6468488692358095045.1233646566971498252. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.092172501s
	[INFO] 10.244.0.4:53802 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000272203s
	[INFO] 10.244.2.2:45834 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.002445225s
	[INFO] 10.244.1.2:55758 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.001212313s
	[INFO] 10.244.1.2:49496 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.020711714s
	[INFO] 10.244.1.2:59784 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000179402s
	[INFO] 10.244.2.2:54197 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000307603s
	[INFO] 10.244.2.2:47409 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.109862534s
	[INFO] 10.244.2.2:37295 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000193702s
	[INFO] 10.244.2.2:47677 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000246203s
	[INFO] 10.244.1.2:55326 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184102s
	[INFO] 10.244.1.2:39052 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132401s
	[INFO] 10.244.1.2:36947 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000229203s
	[INFO] 10.244.1.2:46183 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000222102s
	[INFO] 10.244.1.2:34533 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000178002s
	[INFO] 10.244.0.4:42338 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000207002s
	[INFO] 10.244.0.4:51375 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184802s
	[INFO] 10.244.2.2:45212 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092601s
	[INFO] 10.244.1.2:42310 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000222702s
	[INFO] 10.244.0.4:33035 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000632306s
	[INFO] 10.244.2.2:37533 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000110201s
	[INFO] 10.244.1.2:37391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144001s
	[INFO] 10.244.1.2:36153 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000088501s
	
	
	==> describe nodes <==
	Name:               ha-011400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-011400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	                    minikube.k8s.io/name=ha-011400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T11_12_09_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 11:12:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-011400
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 11:21:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 11:21:17 +0000   Mon, 27 Jan 2025 11:12:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 11:21:17 +0000   Mon, 27 Jan 2025 11:12:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 11:21:17 +0000   Mon, 27 Jan 2025 11:12:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 11:21:17 +0000   Mon, 27 Jan 2025 11:12:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.192.249
	  Hostname:    ha-011400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 10a11ade80354ac0997dbbc175cad0bf
	  System UUID:                d8404609-e752-314c-b066-45b46de87e79
	  Boot ID:                    4eb876da-53a3-40a3-9774-960843ee30d1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-68jl6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 coredns-668d6bf9bc-228t7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m38s
	  kube-system                 coredns-668d6bf9bc-8b9xh             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m38s
	  kube-system                 etcd-ha-011400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m44s
	  kube-system                 kindnet-ll5br                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m39s
	  kube-system                 kube-apiserver-ha-011400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                 kube-controller-manager-ha-011400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                 kube-proxy-hg72m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m39s
	  kube-system                 kube-scheduler-ha-011400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                 kube-vip-ha-011400                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m37s                  kube-proxy       
	  Normal  NodeHasSufficientPID     9m52s (x7 over 9m52s)  kubelet          Node ha-011400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m52s (x8 over 9m52s)  kubelet          Node ha-011400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m52s (x8 over 9m52s)  kubelet          Node ha-011400 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 9m42s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m42s                  kubelet          Node ha-011400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m42s                  kubelet          Node ha-011400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m42s                  kubelet          Node ha-011400 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m39s                  node-controller  Node ha-011400 event: Registered Node ha-011400 in Controller
	  Normal  NodeReady                9m19s                  kubelet          Node ha-011400 status is now: NodeReady
	  Normal  RegisteredNode           5m58s                  node-controller  Node ha-011400 event: Registered Node ha-011400 in Controller
	  Normal  RegisteredNode           2m8s                   node-controller  Node ha-011400 event: Registered Node ha-011400 in Controller
	
	
	Name:               ha-011400-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-011400-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	                    minikube.k8s.io/name=ha-011400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_01_27T11_15_46_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 11:15:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-011400-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 11:21:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 11:21:18 +0000   Mon, 27 Jan 2025 11:15:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 11:21:18 +0000   Mon, 27 Jan 2025 11:15:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 11:21:18 +0000   Mon, 27 Jan 2025 11:15:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 11:21:18 +0000   Mon, 27 Jan 2025 11:16:07 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.195.173
	  Hostname:    ha-011400-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 a3b338ed17924366a5216d6a6ca57440
	  System UUID:                60f4b9d5-23b7-b341-9c42-534bfb963bdf
	  Boot ID:                    abec3bf6-3ccd-47b8-9b7b-3086f6341ae4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-qwccg                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 etcd-ha-011400-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m8s
	  kube-system                 kindnet-fs97j                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m9s
	  kube-system                 kube-apiserver-ha-011400-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-controller-manager-ha-011400-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-proxy-x52km                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-scheduler-ha-011400-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-vip-ha-011400-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 6m3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  6m9s (x8 over 6m9s)  kubelet          Node ha-011400-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m9s (x8 over 6m9s)  kubelet          Node ha-011400-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m9s (x7 over 6m9s)  kubelet          Node ha-011400-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m4s                 node-controller  Node ha-011400-m02 event: Registered Node ha-011400-m02 in Controller
	  Normal  RegisteredNode           5m58s                node-controller  Node ha-011400-m02 event: Registered Node ha-011400-m02 in Controller
	  Normal  RegisteredNode           2m8s                 node-controller  Node ha-011400-m02 event: Registered Node ha-011400-m02 in Controller
	
	
	Name:               ha-011400-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-011400-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	                    minikube.k8s.io/name=ha-011400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_01_27T11_19_38_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 11:19:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-011400-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 11:21:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 11:21:03 +0000   Mon, 27 Jan 2025 11:19:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 11:21:03 +0000   Mon, 27 Jan 2025 11:19:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 11:21:03 +0000   Mon, 27 Jan 2025 11:19:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 11:21:03 +0000   Mon, 27 Jan 2025 11:20:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.196.110
	  Hostname:    ha-011400-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 5a5a62e2138b409497c96ddb03eff3c7
	  System UUID:                8cc9855d-0e69-ae4d-8590-9ad632ae48d3
	  Boot ID:                    6a0c2e9e-54a4-4459-8824-fa7071960394
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-fzbr5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 etcd-ha-011400-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m17s
	  kube-system                 kindnet-mg445                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m19s
	  kube-system                 kube-apiserver-ha-011400-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-controller-manager-ha-011400-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-proxy-4pjv8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-scheduler-ha-011400-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-vip-ha-011400-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m14s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m19s (x8 over 2m19s)  kubelet          Node ha-011400-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m19s (x8 over 2m19s)  kubelet          Node ha-011400-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m19s (x7 over 2m19s)  kubelet          Node ha-011400-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m18s                  node-controller  Node ha-011400-m03 event: Registered Node ha-011400-m03 in Controller
	  Normal  RegisteredNode           2m14s                  node-controller  Node ha-011400-m03 event: Registered Node ha-011400-m03 in Controller
	  Normal  RegisteredNode           2m8s                   node-controller  Node ha-011400-m03 event: Registered Node ha-011400-m03 in Controller
	
	
	==> dmesg <==
	[  +1.823300] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +6.645538] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan27 11:11] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.182918] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[ +30.184632] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[  +0.107311] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.520304] systemd-fstab-generator[1045]: Ignoring "noauto" option for root device
	[  +0.190737] systemd-fstab-generator[1057]: Ignoring "noauto" option for root device
	[  +0.239308] systemd-fstab-generator[1071]: Ignoring "noauto" option for root device
	[  +2.882619] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +0.198053] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.208312] systemd-fstab-generator[1319]: Ignoring "noauto" option for root device
	[  +0.248120] systemd-fstab-generator[1334]: Ignoring "noauto" option for root device
	[ +11.085414] systemd-fstab-generator[1434]: Ignoring "noauto" option for root device
	[  +0.106636] kauditd_printk_skb: 206 callbacks suppressed
	[  +3.750577] systemd-fstab-generator[1702]: Ignoring "noauto" option for root device
	[  +6.277074] systemd-fstab-generator[1849]: Ignoring "noauto" option for root device
	[  +0.102836] kauditd_printk_skb: 74 callbacks suppressed
	[Jan27 11:12] kauditd_printk_skb: 67 callbacks suppressed
	[  +2.351146] systemd-fstab-generator[2367]: Ignoring "noauto" option for root device
	[  +4.969124] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.869016] kauditd_printk_skb: 29 callbacks suppressed
	[Jan27 11:15] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [3ad7004cc4fe] <==
	{"level":"info","ts":"2025-01-27T11:19:35.297668Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"75f79d602e3dd4a"}
	{"level":"info","ts":"2025-01-27T11:19:35.297689Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"dc6b987122e5b030","remote-peer-id":"75f79d602e3dd4a"}
	{"level":"info","ts":"2025-01-27T11:19:35.318113Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"dc6b987122e5b030","remote-peer-id":"75f79d602e3dd4a"}
	{"level":"info","ts":"2025-01-27T11:19:35.353929Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"dc6b987122e5b030","remote-peer-id":"75f79d602e3dd4a"}
	{"level":"info","ts":"2025-01-27T11:19:35.360763Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"dc6b987122e5b030","to":"75f79d602e3dd4a","stream-type":"stream Message"}
	{"level":"info","ts":"2025-01-27T11:19:35.360791Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"dc6b987122e5b030","remote-peer-id":"75f79d602e3dd4a"}
	{"level":"warn","ts":"2025-01-27T11:19:35.661178Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"75f79d602e3dd4a","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2025-01-27T11:19:36.661041Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"75f79d602e3dd4a","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2025-01-27T11:19:37.167847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dc6b987122e5b030 switched to configuration voters=(531277241131457866 9344481741465494996 15882956122536390704)"}
	{"level":"info","ts":"2025-01-27T11:19:37.168558Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"80cefac5aa050375","local-member-id":"dc6b987122e5b030"}
	{"level":"info","ts":"2025-01-27T11:19:37.168668Z","caller":"etcdserver/server.go:2018","msg":"applied a configuration change through raft","local-member-id":"dc6b987122e5b030","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"75f79d602e3dd4a"}
	{"level":"info","ts":"2025-01-27T11:19:43.298623Z","caller":"traceutil/trace.go:171","msg":"trace[650858134] transaction","detail":"{read_only:false; response_revision:1558; number_of_response:1; }","duration":"103.536552ms","start":"2025-01-27T11:19:43.195070Z","end":"2025-01-27T11:19:43.298606Z","steps":["trace[650858134] 'process raft request'  (duration: 103.285751ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T11:19:45.019553Z","caller":"etcdserver/raft.go:426","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"81ae44a27a2ca5d4","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"717.67µs"}
	{"level":"warn","ts":"2025-01-27T11:19:45.019726Z","caller":"etcdserver/raft.go:426","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"75f79d602e3dd4a","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"894.171µs"}
	{"level":"info","ts":"2025-01-27T11:19:45.032153Z","caller":"traceutil/trace.go:171","msg":"trace[381426560] transaction","detail":"{read_only:false; response_revision:1562; number_of_response:1; }","duration":"203.802886ms","start":"2025-01-27T11:19:44.828133Z","end":"2025-01-27T11:19:45.031936Z","steps":["trace[381426560] 'process raft request'  (duration: 203.497984ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T11:20:45.164161Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"145.701375ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox-58667487b6-ktdvr\" limit:1 ","response":"range_response_count:1 size:2238"}
	{"level":"info","ts":"2025-01-27T11:20:45.164249Z","caller":"traceutil/trace.go:171","msg":"trace[855451733] range","detail":"{range_begin:/registry/pods/default/busybox-58667487b6-ktdvr; range_end:; response_count:1; response_revision:1802; }","duration":"145.812576ms","start":"2025-01-27T11:20:45.018422Z","end":"2025-01-27T11:20:45.164235Z","steps":["trace[855451733] 'agreement among raft nodes before linearized reading'  (duration: 145.663475ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T11:20:45.164512Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.030777ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox-58667487b6-9vldw\" limit:1 ","response":"range_response_count:1 size:2238"}
	{"level":"info","ts":"2025-01-27T11:20:45.164556Z","caller":"traceutil/trace.go:171","msg":"trace[1390534249] range","detail":"{range_begin:/registry/pods/default/busybox-58667487b6-9vldw; range_end:; response_count:1; response_revision:1802; }","duration":"146.142477ms","start":"2025-01-27T11:20:45.018406Z","end":"2025-01-27T11:20:45.164548Z","steps":["trace[1390534249] 'agreement among raft nodes before linearized reading'  (duration: 146.008177ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T11:20:45.164694Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.274378ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox-58667487b6-xntc6\" limit:1 ","response":"range_response_count:1 size:2238"}
	{"level":"info","ts":"2025-01-27T11:20:45.164723Z","caller":"traceutil/trace.go:171","msg":"trace[1645301820] range","detail":"{range_begin:/registry/pods/default/busybox-58667487b6-xntc6; range_end:; response_count:1; response_revision:1802; }","duration":"146.318278ms","start":"2025-01-27T11:20:45.018399Z","end":"2025-01-27T11:20:45.164717Z","steps":["trace[1645301820] 'agreement among raft nodes before linearized reading'  (duration: 146.249478ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T11:20:45.164824Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.43628ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox-58667487b6-c2rc2\" limit:1 ","response":"range_response_count:1 size:2238"}
	{"level":"info","ts":"2025-01-27T11:20:45.164871Z","caller":"traceutil/trace.go:171","msg":"trace[58337675] range","detail":"{range_begin:/registry/pods/default/busybox-58667487b6-c2rc2; range_end:; response_count:1; response_revision:1802; }","duration":"146.49128ms","start":"2025-01-27T11:20:45.018373Z","end":"2025-01-27T11:20:45.164864Z","steps":["trace[58337675] 'agreement among raft nodes before linearized reading'  (duration: 146.43008ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T11:20:45.632313Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.464551ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12695810796518671919 > lease_revoke:<id:25d494a7797cd4eb>","response":"size:29"}
	{"level":"info","ts":"2025-01-27T11:20:45.633614Z","caller":"traceutil/trace.go:171","msg":"trace[1183476360] transaction","detail":"{read_only:false; response_revision:1824; number_of_response:1; }","duration":"142.974761ms","start":"2025-01-27T11:20:45.490616Z","end":"2025-01-27T11:20:45.633591Z","steps":["trace[1183476360] 'process raft request'  (duration: 142.440458ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:21:51 up 11 min,  0 users,  load average: 0.50, 0.35, 0.20
	Linux ha-011400 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2069e52c51e4] <==
	I0127 11:21:10.831417       1 main.go:324] Node ha-011400-m03 has CIDR [10.244.2.0/24] 
	I0127 11:21:20.823239       1 main.go:297] Handling node with IPs: map[172.29.192.249:{}]
	I0127 11:21:20.823292       1 main.go:301] handling current node
	I0127 11:21:20.823310       1 main.go:297] Handling node with IPs: map[172.29.195.173:{}]
	I0127 11:21:20.823322       1 main.go:324] Node ha-011400-m02 has CIDR [10.244.1.0/24] 
	I0127 11:21:20.823863       1 main.go:297] Handling node with IPs: map[172.29.196.110:{}]
	I0127 11:21:20.823879       1 main.go:324] Node ha-011400-m03 has CIDR [10.244.2.0/24] 
	I0127 11:21:30.831925       1 main.go:297] Handling node with IPs: map[172.29.196.110:{}]
	I0127 11:21:30.832158       1 main.go:324] Node ha-011400-m03 has CIDR [10.244.2.0/24] 
	I0127 11:21:30.832594       1 main.go:297] Handling node with IPs: map[172.29.192.249:{}]
	I0127 11:21:30.832670       1 main.go:301] handling current node
	I0127 11:21:30.832733       1 main.go:297] Handling node with IPs: map[172.29.195.173:{}]
	I0127 11:21:30.832765       1 main.go:324] Node ha-011400-m02 has CIDR [10.244.1.0/24] 
	I0127 11:21:40.832104       1 main.go:297] Handling node with IPs: map[172.29.192.249:{}]
	I0127 11:21:40.832247       1 main.go:301] handling current node
	I0127 11:21:40.832270       1 main.go:297] Handling node with IPs: map[172.29.195.173:{}]
	I0127 11:21:40.832282       1 main.go:324] Node ha-011400-m02 has CIDR [10.244.1.0/24] 
	I0127 11:21:40.832737       1 main.go:297] Handling node with IPs: map[172.29.196.110:{}]
	I0127 11:21:40.832837       1 main.go:324] Node ha-011400-m03 has CIDR [10.244.2.0/24] 
	I0127 11:21:50.823059       1 main.go:297] Handling node with IPs: map[172.29.196.110:{}]
	I0127 11:21:50.823103       1 main.go:324] Node ha-011400-m03 has CIDR [10.244.2.0/24] 
	I0127 11:21:50.823326       1 main.go:297] Handling node with IPs: map[172.29.192.249:{}]
	I0127 11:21:50.823338       1 main.go:301] handling current node
	I0127 11:21:50.823371       1 main.go:297] Handling node with IPs: map[172.29.195.173:{}]
	I0127 11:21:50.823377       1 main.go:324] Node ha-011400-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [9bbef2b1e01c] <==
	I0127 11:12:08.048803       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0127 11:12:08.087963       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0127 11:12:08.110529       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0127 11:12:11.736262       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0127 11:12:11.814228       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0127 11:19:32.469733       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0127 11:19:32.469782       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 173.901µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0127 11:19:32.471630       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0127 11:19:32.473520       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0127 11:19:32.475652       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="43.958334ms" method="PATCH" path="/api/v1/namespaces/default/events/ha-011400-m03.181e88aa5f0c48a1" result=null
	E0127 11:20:52.520955       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50918: use of closed network connection
	E0127 11:20:53.037825       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50920: use of closed network connection
	E0127 11:20:53.584570       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50922: use of closed network connection
	E0127 11:20:54.162206       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50924: use of closed network connection
	E0127 11:20:54.784350       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50926: use of closed network connection
	E0127 11:20:55.323931       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50928: use of closed network connection
	E0127 11:20:55.829844       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50930: use of closed network connection
	E0127 11:20:56.383232       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50932: use of closed network connection
	E0127 11:20:56.904034       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50934: use of closed network connection
	E0127 11:20:57.853762       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50937: use of closed network connection
	E0127 11:21:08.369828       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50939: use of closed network connection
	E0127 11:21:08.887545       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50942: use of closed network connection
	E0127 11:21:19.384206       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50944: use of closed network connection
	E0127 11:21:19.894124       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50949: use of closed network connection
	E0127 11:21:30.403920       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50951: use of closed network connection
	
	
	==> kube-controller-manager [198b69006a51] <==
	I0127 11:19:38.226947       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m03"
	I0127 11:19:38.389565       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m03"
	I0127 11:19:41.791753       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m03"
	I0127 11:19:42.956852       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m03"
	I0127 11:19:43.027097       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m03"
	I0127 11:20:01.156637       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m03"
	I0127 11:20:01.210260       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m03"
	I0127 11:20:01.261557       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m03"
	I0127 11:20:02.466276       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m03"
	I0127 11:20:44.472155       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="148.993493ms"
	I0127 11:20:44.553873       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="78.619518ms"
	I0127 11:20:44.554268       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="111.2µs"
	I0127 11:20:45.010793       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="449.888294ms"
	I0127 11:20:45.425801       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="414.875208ms"
	I0127 11:20:45.481595       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="55.731597ms"
	I0127 11:20:45.485655       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="71.7µs"
	I0127 11:20:47.780201       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="50.415323ms"
	I0127 11:20:47.780265       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="32.8µs"
	I0127 11:20:48.181724       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="19.2943ms"
	I0127 11:20:48.183323       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="173.901µs"
	I0127 11:20:49.768616       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="15.835864ms"
	I0127 11:20:49.769252       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="63.801µs"
	I0127 11:21:03.674571       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m03"
	I0127 11:21:17.961302       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400"
	I0127 11:21:18.661228       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m02"
	
	
	==> kube-proxy [b57131c4a903] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 11:12:13.106799       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 11:12:13.120320       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.192.249"]
	E0127 11:12:13.120586       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 11:12:13.199607       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 11:12:13.199770       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 11:12:13.199818       1 server_linux.go:170] "Using iptables Proxier"
	I0127 11:12:13.205965       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 11:12:13.207246       1 server.go:497] "Version info" version="v1.32.1"
	I0127 11:12:13.207367       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 11:12:13.211546       1 config.go:199] "Starting service config controller"
	I0127 11:12:13.211584       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 11:12:13.211618       1 config.go:105] "Starting endpoint slice config controller"
	I0127 11:12:13.211624       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 11:12:13.212361       1 config.go:329] "Starting node config controller"
	I0127 11:12:13.212392       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 11:12:13.311857       1 shared_informer.go:320] Caches are synced for service config
	I0127 11:12:13.311982       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 11:12:13.313159       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [dcdc67228908] <==
	W0127 11:12:04.013552       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 11:12:04.013621       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:12:04.206947       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0127 11:12:04.207091       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:12:04.227891       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 11:12:04.227920       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 11:12:04.255674       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 11:12:04.255703       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 11:12:04.334283       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 11:12:04.334534       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 11:12:04.365853       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 11:12:04.365982       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 11:12:06.920399       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0127 11:19:31.837856       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4pjv8\": pod kube-proxy-4pjv8 is already assigned to node \"ha-011400-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4pjv8" node="ha-011400-m03"
	E0127 11:19:31.844329       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod c0b28c82-50ac-4021-949d-75883580a018(kube-system/kube-proxy-4pjv8) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-4pjv8"
	E0127 11:19:31.844428       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4pjv8\": pod kube-proxy-4pjv8 is already assigned to node \"ha-011400-m03\"" pod="kube-system/kube-proxy-4pjv8"
	E0127 11:19:31.837948       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mg445\": pod kindnet-mg445 is already assigned to node \"ha-011400-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-mg445" node="ha-011400-m03"
	E0127 11:19:31.844744       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod 37787d9b-44c4-4e83-8d2c-e67333301fd1(kube-system/kindnet-mg445) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-mg445"
	E0127 11:19:31.844924       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mg445\": pod kindnet-mg445 is already assigned to node \"ha-011400-m03\"" pod="kube-system/kindnet-mg445"
	I0127 11:19:31.845083       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mg445" node="ha-011400-m03"
	I0127 11:19:31.844630       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4pjv8" node="ha-011400-m03"
	E0127 11:20:44.421575       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-58667487b6-fzbr5\": pod busybox-58667487b6-fzbr5 is already assigned to node \"ha-011400-m03\"" plugin="DefaultBinder" pod="default/busybox-58667487b6-fzbr5" node="ha-011400-m03"
	E0127 11:20:44.422211       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod f0d62b04-6b3f-4c90-8b5a-d5dc2e7b527c(default/busybox-58667487b6-fzbr5) wasn't assumed so cannot be forgotten" pod="default/busybox-58667487b6-fzbr5"
	E0127 11:20:44.422616       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-58667487b6-fzbr5\": pod busybox-58667487b6-fzbr5 is already assigned to node \"ha-011400-m03\"" pod="default/busybox-58667487b6-fzbr5"
	I0127 11:20:44.422706       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-58667487b6-fzbr5" node="ha-011400-m03"
	
	
	==> kubelet <==
	Jan 27 11:17:08 ha-011400 kubelet[2374]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 11:17:08 ha-011400 kubelet[2374]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 11:17:08 ha-011400 kubelet[2374]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 11:17:08 ha-011400 kubelet[2374]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 11:18:08 ha-011400 kubelet[2374]: E0127 11:18:08.274870    2374 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 11:18:08 ha-011400 kubelet[2374]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 11:18:08 ha-011400 kubelet[2374]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 11:18:08 ha-011400 kubelet[2374]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 11:18:08 ha-011400 kubelet[2374]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 11:19:08 ha-011400 kubelet[2374]: E0127 11:19:08.275371    2374 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 11:19:08 ha-011400 kubelet[2374]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 11:19:08 ha-011400 kubelet[2374]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 11:19:08 ha-011400 kubelet[2374]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 11:19:08 ha-011400 kubelet[2374]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 11:20:08 ha-011400 kubelet[2374]: E0127 11:20:08.277374    2374 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 11:20:08 ha-011400 kubelet[2374]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 11:20:08 ha-011400 kubelet[2374]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 11:20:08 ha-011400 kubelet[2374]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 11:20:08 ha-011400 kubelet[2374]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 11:20:44 ha-011400 kubelet[2374]: I0127 11:20:44.677913    2374 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wv8nl\" (UniqueName: \"kubernetes.io/projected/272bb8db-6c80-4149-b6cd-4e68fe388069-kube-api-access-wv8nl\") pod \"busybox-58667487b6-68jl6\" (UID: \"272bb8db-6c80-4149-b6cd-4e68fe388069\") " pod="default/busybox-58667487b6-68jl6"
	Jan 27 11:21:08 ha-011400 kubelet[2374]: E0127 11:21:08.276382    2374 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 11:21:08 ha-011400 kubelet[2374]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 11:21:08 ha-011400 kubelet[2374]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 11:21:08 ha-011400 kubelet[2374]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 11:21:08 ha-011400 kubelet[2374]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-011400 -n ha-011400
E0127 11:22:03.994228    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-011400 -n ha-011400: (12.2495259s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-011400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (68.71s)
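For manual triage, the probes below approximate what PingHostFromPods checks, going by the test name and the host.minikube.internal lookups visible in the CoreDNS log above; the pod name is one of the busybox replicas listed on ha-011400, and the exact commands the test itself runs may differ:

    kubectl --context ha-011400 exec busybox-58667487b6-68jl6 -- nslookup host.minikube.internal
    kubectl --context ha-011400 exec busybox-58667487b6-68jl6 -- sh -c "ping -c 1 host.minikube.internal"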

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (50.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Non-zero exit: out/minikube-windows-amd64.exe profile list --output json: exit status 1 (15.5133367s)
ha_test.go:394: failed to list profiles with json format. args "out/minikube-windows-amd64.exe profile list --output json": exit status 1
ha_test.go:400: failed to decode json from profile list: args "out/minikube-windows-amd64.exe profile list --output json": unexpected end of JSON input
ha_test.go:413: expected the json of 'profile list' to include "ha-011400" but got *""*. args: "out/minikube-windows-amd64.exe profile list --output json"
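To retry the failing step by hand, the commands below mirror the sequence asserted at ha_test.go:392-413 (run profile list, confirm the output parses as JSON, confirm it names the cluster); piping through PowerShell's ConvertFrom-Json is only an illustrative stand-in for the test's own JSON decoding:

    out/minikube-windows-amd64.exe profile list --output json
    out/minikube-windows-amd64.exe profile list --output json | ConvertFrom-Json   # empty output here reproduces "unexpected end of JSON input"
    (out/minikube-windows-amd64.exe profile list --output json) -match "ha-011400" # expected to match on a healthy cluster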
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-011400 -n ha-011400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-011400 -n ha-011400: (12.1647605s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 logs -n 25: (8.6602259s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                            |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| cp      | ha-011400 cp ha-011400-m03:/home/docker/cp-test.txt                                                                       | ha-011400 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:33 UTC | 27 Jan 25 11:33 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1959896975\001\cp-test_ha-011400-m03.txt |           |                   |         |                     |                     |
	| ssh     | ha-011400 ssh -n                                                                                                          | ha-011400 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:33 UTC | 27 Jan 25 11:33 UTC |
	|         | ha-011400-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-011400 cp ha-011400-m03:/home/docker/cp-test.txt                                                                       | ha-011400 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:33 UTC | 27 Jan 25 11:33 UTC |
	|         | ha-011400:/home/docker/cp-test_ha-011400-m03_ha-011400.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-011400 ssh -n                                                                                                          | ha-011400 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:33 UTC | 27 Jan 25 11:33 UTC |
	|         | ha-011400-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-011400 ssh -n ha-011400 sudo cat                                                                                       | ha-011400 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:33 UTC | 27 Jan 25 11:34 UTC |
	|         | /home/docker/cp-test_ha-011400-m03_ha-011400.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-011400 cp ha-011400-m03:/home/docker/cp-test.txt                                                                       | ha-011400 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:34 UTC | 27 Jan 25 11:34 UTC |
	|         | ha-011400-m02:/home/docker/cp-test_ha-011400-m03_ha-011400-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-011400 ssh -n                                                                                                          | ha-011400 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:34 UTC | 27 Jan 25 11:34 UTC |
	|         | ha-011400-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-011400 ssh -n ha-011400-m02 sudo cat                                                                                   | ha-011400 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:34 UTC | 27 Jan 25 11:34 UTC |
	|         | /home/docker/cp-test_ha-011400-m03_ha-011400-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-011400 cp ha-011400-m03:/home/docker/cp-test.txt                                                                       | ha-011400 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:34 UTC | 27 Jan 25 11:34 UTC |
	|         | ha-011400-m04:/home/docker/cp-test_ha-011400-m03_ha-011400-m04.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-011400 ssh -n                                                                                                          | ha-011400 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:34 UTC | 27 Jan 25 11:35 UTC |
	|         | ha-011400-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-011400 ssh -n ha-011400-m04 sudo cat                                                                                   | ha-011400 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:35 UTC | 27 Jan 25 11:35 UTC |
	|         | /home/docker/cp-test_ha-011400-m03_ha-011400-m04.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-011400 cp testdata\cp-test.txt                                                                                         | ha-011400 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:35 UTC | 27 Jan 25 11:35 UTC |
	|         | ha-011400-m04:/home/docker/cp-test.txt                                                                                    |           |                   |         |                     |                     |
	| ssh     | ha-011400 ssh -n                                                                                                          | ha-011400 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:35 UTC | 27 Jan 25 11:35 UTC |
	|         | ha-011400-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-011400 cp ha-011400-m04:/home/docker/cp-test.txt                                                                       | ha-011400 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:35 UTC | 27 Jan 25 11:35 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1959896975\001\cp-test_ha-011400-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-011400 ssh -n                                                                                                          | ha-011400 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:35 UTC | 27 Jan 25 11:35 UTC |
	|         | ha-011400-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-011400 cp ha-011400-m04:/home/docker/cp-test.txt                                                                       | ha-011400 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:35 UTC | 27 Jan 25 11:36 UTC |
	|         | ha-011400:/home/docker/cp-test_ha-011400-m04_ha-011400.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-011400 ssh -n                                                                                                          | ha-011400 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:36 UTC | 27 Jan 25 11:36 UTC |
	|         | ha-011400-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-011400 ssh -n ha-011400 sudo cat                                                                                       | ha-011400 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:36 UTC | 27 Jan 25 11:36 UTC |
	|         | /home/docker/cp-test_ha-011400-m04_ha-011400.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-011400 cp ha-011400-m04:/home/docker/cp-test.txt                                                                       | ha-011400 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:36 UTC | 27 Jan 25 11:36 UTC |
	|         | ha-011400-m02:/home/docker/cp-test_ha-011400-m04_ha-011400-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-011400 ssh -n                                                                                                          | ha-011400 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:36 UTC | 27 Jan 25 11:36 UTC |
	|         | ha-011400-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-011400 ssh -n ha-011400-m02 sudo cat                                                                                   | ha-011400 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:36 UTC | 27 Jan 25 11:37 UTC |
	|         | /home/docker/cp-test_ha-011400-m04_ha-011400-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-011400 cp ha-011400-m04:/home/docker/cp-test.txt                                                                       | ha-011400 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:37 UTC | 27 Jan 25 11:37 UTC |
	|         | ha-011400-m03:/home/docker/cp-test_ha-011400-m04_ha-011400-m03.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-011400 ssh -n                                                                                                          | ha-011400 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:37 UTC | 27 Jan 25 11:37 UTC |
	|         | ha-011400-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-011400 ssh -n ha-011400-m03 sudo cat                                                                                   | ha-011400 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:37 UTC | 27 Jan 25 11:37 UTC |
	|         | /home/docker/cp-test_ha-011400-m04_ha-011400-m03.txt                                                                      |           |                   |         |                     |                     |
	| node    | ha-011400 node stop m02 -v=7                                                                                              | ha-011400 | minikube6\jenkins | v1.35.0 | 27 Jan 25 11:37 UTC | 27 Jan 25 11:38 UTC |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
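For readability, the tail of the audit trail above is the multi-node copy-and-verify pattern exercised by the earlier CopyFile step, followed by the node stop that this DegradedAfterControlPlaneNodeStop check reacts to. Indicative invocations reconstructed from the audit rows (exact quoting and flag placement in the test harness may differ):

    out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400-m04:/home/docker/cp-test.txt ha-011400-m03:/home/docker/cp-test_ha-011400-m04_ha-011400-m03.txt
    out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m03 "sudo cat /home/docker/cp-test_ha-011400-m04_ha-011400-m03.txt"
    out/minikube-windows-amd64.exe -p ha-011400 node stop m02 -v=7 --alsologtostderr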
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 11:09:07
	Running on machine: minikube6
	Binary: Built with gc go1.23.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 11:09:07.222562    5908 out.go:345] Setting OutFile to fd 1164 ...
	I0127 11:09:07.297679    5908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:09:07.297679    5908 out.go:358] Setting ErrFile to fd 1620...
	I0127 11:09:07.297679    5908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:09:07.319245    5908 out.go:352] Setting JSON to false
	I0127 11:09:07.322311    5908 start.go:129] hostinfo: {"hostname":"minikube6","uptime":439130,"bootTime":1737537016,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5371 Build 19045.5371","kernelVersion":"10.0.19045.5371 Build 19045.5371","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0127 11:09:07.322376    5908 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0127 11:09:07.327670    5908 out.go:177] * [ha-011400] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	I0127 11:09:07.331440    5908 notify.go:220] Checking for updates...
	I0127 11:09:07.333218    5908 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 11:09:07.335730    5908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:09:07.339346    5908 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0127 11:09:07.341979    5908 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 11:09:07.344594    5908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:09:07.347542    5908 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 11:09:12.521964    5908 out.go:177] * Using the hyperv driver based on user configuration
	I0127 11:09:12.526114    5908 start.go:297] selected driver: hyperv
	I0127 11:09:12.526114    5908 start.go:901] validating driver "hyperv" against <nil>
	I0127 11:09:12.526114    5908 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 11:09:12.572810    5908 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 11:09:12.573584    5908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:09:12.573584    5908 cni.go:84] Creating CNI manager for ""
	I0127 11:09:12.574403    5908 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0127 11:09:12.574403    5908 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0127 11:09:12.574403    5908 start.go:340] cluster config:
	{Name:ha-011400 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-011400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:09:12.575430    5908 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 11:09:12.580573    5908 out.go:177] * Starting "ha-011400" primary control-plane node in "ha-011400" cluster
	I0127 11:09:12.586108    5908 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0127 11:09:12.586108    5908 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0127 11:09:12.586108    5908 cache.go:56] Caching tarball of preloaded images
	I0127 11:09:12.587433    5908 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 11:09:12.587599    5908 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0127 11:09:12.587599    5908 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\config.json ...
	I0127 11:09:12.588396    5908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\config.json: {Name:mk918c8acba483aadee8de079cb12efb4b886e8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:09:12.589617    5908 start.go:360] acquireMachinesLock for ha-011400: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 11:09:12.589617    5908 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-011400"
	I0127 11:09:12.590290    5908 start.go:93] Provisioning new machine with config: &{Name:ha-011400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-011400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 11:09:12.590327    5908 start.go:125] createHost starting for "" (driver="hyperv")
	I0127 11:09:12.595937    5908 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0127 11:09:12.595937    5908 start.go:159] libmachine.API.Create for "ha-011400" (driver="hyperv")
	I0127 11:09:12.597037    5908 client.go:168] LocalClient.Create starting
	I0127 11:09:12.597298    5908 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0127 11:09:12.597298    5908 main.go:141] libmachine: Decoding PEM data...
	I0127 11:09:12.597817    5908 main.go:141] libmachine: Parsing certificate...
	I0127 11:09:12.597981    5908 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0127 11:09:12.598188    5908 main.go:141] libmachine: Decoding PEM data...
	I0127 11:09:12.598188    5908 main.go:141] libmachine: Parsing certificate...
	I0127 11:09:12.598363    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0127 11:09:14.533010    5908 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0127 11:09:14.533237    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:14.533237    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0127 11:09:16.159131    5908 main.go:141] libmachine: [stdout =====>] : False
	
	I0127 11:09:16.159537    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:16.159658    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0127 11:09:17.626696    5908 main.go:141] libmachine: [stdout =====>] : True
	
	I0127 11:09:17.626696    5908 main.go:141] libmachine: [stderr =====>] : 
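For context, the two role checks logged above test membership in the BUILTIN\Hyper-V Administrators group (S-1-5-32-578 is its well-known SID) and then in the built-in Administrator role; provisioning proceeds because the second check returns True. The standalone check, copied from the command above, can be run in any PowerShell session:

    ([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")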
	I0127 11:09:17.626913    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0127 11:09:21.086973    5908 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0127 11:09:21.087597    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:21.090325    5908 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 11:09:21.600112    5908 main.go:141] libmachine: Creating SSH key...
	I0127 11:09:21.807745    5908 main.go:141] libmachine: Creating VM...
	I0127 11:09:21.808091    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0127 11:09:24.541527    5908 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0127 11:09:24.541587    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:24.541816    5908 main.go:141] libmachine: Using switch "Default Switch"
	I0127 11:09:24.541925    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0127 11:09:26.224571    5908 main.go:141] libmachine: [stdout =====>] : True
	
	I0127 11:09:26.225134    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:26.225134    5908 main.go:141] libmachine: Creating VHD
	I0127 11:09:26.225462    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\fixed.vhd' -SizeBytes 10MB -Fixed
	I0127 11:09:29.898213    5908 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : BCF11593-87A7-490B-BD2B-18E7A6434F9B
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0127 11:09:29.898213    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:29.898815    5908 main.go:141] libmachine: Writing magic tar header
	I0127 11:09:29.898815    5908 main.go:141] libmachine: Writing SSH key tar header
	I0127 11:09:29.912214    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\disk.vhd' -VHDType Dynamic -DeleteSource
	I0127 11:09:33.013760    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:09:33.014714    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:33.014714    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\disk.vhd' -SizeBytes 20000MB
	I0127 11:09:35.461459    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:09:35.461459    5908 main.go:141] libmachine: [stderr =====>] : 
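The three cmdlet calls above are the boot2docker-style disk build: a tiny fixed VHD is created, the SSH key is written into it as a raw tar stream, and the file is then converted to a dynamic VHD and grown to the requested size. A condensed sketch of the same sequence, with the paths and sizes taken from this log ($machineDir is shorthand introduced here, not a variable the driver uses):

    $machineDir = 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400'
    Hyper-V\New-VHD -Path "$machineDir\fixed.vhd" -SizeBytes 10MB -Fixed
    # minikube then writes the magic tar header and SSH key tar header into fixed.vhd
    Hyper-V\Convert-VHD -Path "$machineDir\fixed.vhd" -DestinationPath "$machineDir\disk.vhd" -VHDType Dynamic -DeleteSource
    Hyper-V\Resize-VHD -Path "$machineDir\disk.vhd" -SizeBytes 20000MB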
	I0127 11:09:35.462473    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-011400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0127 11:09:38.867256    5908 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-011400 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0127 11:09:38.867440    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:38.867474    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-011400 -DynamicMemoryEnabled $false
	I0127 11:09:41.003653    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:09:41.003653    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:41.003653    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-011400 -Count 2
	I0127 11:09:43.052228    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:09:43.052228    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:43.052228    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-011400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\boot2docker.iso'
	I0127 11:09:45.485209    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:09:45.485209    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:45.485209    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-011400 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\disk.vhd'
	I0127 11:09:47.961077    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:09:47.961572    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:47.961572    5908 main.go:141] libmachine: Starting VM...
	I0127 11:09:47.961572    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-011400
	I0127 11:09:50.829478    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:09:50.829725    5908 main.go:141] libmachine: [stderr =====>] : 
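With the disk in place, the VM is created, sized, attached to the ISO and data disk, and started. The cmdlet sequence above, condensed for reference (names and sizes are from this run; $machineDir is the same shorthand as in the previous sketch):

    Hyper-V\New-VM ha-011400 -Path $machineDir -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
    Hyper-V\Set-VMMemory -VMName ha-011400 -DynamicMemoryEnabled $false
    Hyper-V\Set-VMProcessor ha-011400 -Count 2
    Hyper-V\Set-VMDvdDrive -VMName ha-011400 -Path "$machineDir\boot2docker.iso"
    Hyper-V\Add-VMHardDiskDrive -VMName ha-011400 -Path "$machineDir\disk.vhd"
    Hyper-V\Start-VM ha-011400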
	I0127 11:09:50.829725    5908 main.go:141] libmachine: Waiting for host to start...
	I0127 11:09:50.829725    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:09:52.949375    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:09:52.950006    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:52.950006    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:09:55.332754    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:09:55.333360    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:56.333756    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:09:58.463211    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:09:58.463211    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:09:58.463211    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:10:00.985686    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:10:00.985686    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:01.986228    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:10:04.124938    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:10:04.124938    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:04.124938    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:10:06.500200    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:10:06.500200    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:07.500809    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:10:09.595674    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:10:09.595898    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:09.595969    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:10:11.986358    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:10:11.986409    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:12.987239    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:10:15.105667    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:10:15.105667    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:15.106630    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:10:17.598066    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:10:17.598066    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:17.598825    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:10:19.605367    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:10:19.606369    5908 main.go:141] libmachine: [stderr =====>] : 
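The repeated state and IP queries above are a poll-until-ready loop: the driver keeps asking Hyper-V for the VM state and the first IP of the first network adapter until an address appears (172.29.192.249 here). A minimal equivalent of that loop, with the one-second interval being an assumption rather than something the log states:

    do {
        Start-Sleep -Seconds 1   # interval is illustrative
        $state = ( Hyper-V\Get-VM ha-011400 ).state
        $ip    = (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
    } until ($state -eq 'Running' -and $ip)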
	I0127 11:10:19.606399    5908 machine.go:93] provisionDockerMachine start ...
	I0127 11:10:19.606529    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:10:21.613169    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:10:21.613169    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:21.613429    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:10:24.000865    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:10:24.000865    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:24.006563    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:10:24.020134    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.192.249 22 <nil> <nil>}
	I0127 11:10:24.020134    5908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 11:10:24.157564    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 11:10:24.157564    5908 buildroot.go:166] provisioning hostname "ha-011400"
	I0127 11:10:24.157711    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:10:26.143087    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:10:26.143723    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:26.143723    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:10:28.551460    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:10:28.551537    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:28.556127    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:10:28.556864    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.192.249 22 <nil> <nil>}
	I0127 11:10:28.556864    5908 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-011400 && echo "ha-011400" | sudo tee /etc/hostname
	I0127 11:10:28.713391    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-011400
	
	I0127 11:10:28.713391    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:10:30.757638    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:10:30.758661    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:30.758661    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:10:33.184878    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:10:33.185376    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:33.190808    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:10:33.191539    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.192.249 22 <nil> <nil>}
	I0127 11:10:33.191539    5908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-011400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-011400/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-011400' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:10:33.350859    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 11:10:33.350966    5908 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0127 11:10:33.351080    5908 buildroot.go:174] setting up certificates
	I0127 11:10:33.351109    5908 provision.go:84] configureAuth start
	I0127 11:10:33.351109    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:10:35.346492    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:10:35.346718    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:35.346825    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:10:37.776994    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:10:37.776994    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:37.777199    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:10:39.814583    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:10:39.814583    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:39.814583    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:10:42.188395    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:10:42.188442    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:42.188442    5908 provision.go:143] copyHostCerts
	I0127 11:10:42.188442    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0127 11:10:42.188962    5908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0127 11:10:42.188962    5908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0127 11:10:42.189335    5908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0127 11:10:42.190630    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0127 11:10:42.190875    5908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0127 11:10:42.190953    5908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0127 11:10:42.191410    5908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0127 11:10:42.192665    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0127 11:10:42.192842    5908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0127 11:10:42.192842    5908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0127 11:10:42.193046    5908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0127 11:10:42.194380    5908 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-011400 san=[127.0.0.1 172.29.192.249 ha-011400 localhost minikube]
	I0127 11:10:42.317687    5908 provision.go:177] copyRemoteCerts
	I0127 11:10:42.326827    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:10:42.326827    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:10:44.406944    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:10:44.406944    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:44.406944    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:10:46.794685    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:10:46.794685    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:46.795761    5908 sshutil.go:53] new ssh client: &{IP:172.29.192.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\id_rsa Username:docker}
	I0127 11:10:46.900156    5908 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5732808s)
	I0127 11:10:46.900156    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0127 11:10:46.900156    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 11:10:46.939905    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0127 11:10:46.940388    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0127 11:10:46.980857    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0127 11:10:46.981981    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 11:10:47.030423    5908 provision.go:87] duration metric: took 13.6791718s to configureAuth
	I0127 11:10:47.030423    5908 buildroot.go:189] setting minikube options for container-runtime
	I0127 11:10:47.031620    5908 config.go:182] Loaded profile config "ha-011400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 11:10:47.031620    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:10:49.098432    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:10:49.098432    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:49.099137    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:10:51.537040    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:10:51.537870    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:51.543074    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:10:51.543734    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.192.249 22 <nil> <nil>}
	I0127 11:10:51.543734    5908 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0127 11:10:51.684554    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0127 11:10:51.684658    5908 buildroot.go:70] root file system type: tmpfs
	I0127 11:10:51.684885    5908 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0127 11:10:51.684973    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:10:53.683836    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:10:53.683836    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:53.683929    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:10:56.063779    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:10:56.064517    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:56.070083    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:10:56.070871    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.192.249 22 <nil> <nil>}
	I0127 11:10:56.070871    5908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0127 11:10:56.238896    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0127 11:10:56.238964    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:10:58.225110    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:10:58.225305    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:10:58.225602    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:11:00.693976    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:11:00.694043    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:00.697892    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:11:00.699277    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.192.249 22 <nil> <nil>}
	I0127 11:11:00.699277    5908 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0127 11:11:02.953410    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0127 11:11:02.953410    5908 machine.go:96] duration metric: took 43.3465606s to provisionDockerMachine
	I0127 11:11:02.953410    5908 client.go:171] duration metric: took 1m50.3552256s to LocalClient.Create
	I0127 11:11:02.953410    5908 start.go:167] duration metric: took 1m50.356326s to libmachine.API.Create "ha-011400"
	I0127 11:11:02.953410    5908 start.go:293] postStartSetup for "ha-011400" (driver="hyperv")
	I0127 11:11:02.953410    5908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:11:02.965727    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:11:02.965727    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:11:05.169331    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:11:05.169331    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:05.169331    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:11:07.562887    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:11:07.563268    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:07.563817    5908 sshutil.go:53] new ssh client: &{IP:172.29.192.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\id_rsa Username:docker}
	I0127 11:11:07.668017    5908 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7022405s)
	I0127 11:11:07.684358    5908 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:11:07.689985    5908 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 11:11:07.689985    5908 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0127 11:11:07.689985    5908 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0127 11:11:07.691847    5908 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> 59562.pem in /etc/ssl/certs
	I0127 11:11:07.691957    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> /etc/ssl/certs/59562.pem
	I0127 11:11:07.702578    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:11:07.719676    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem --> /etc/ssl/certs/59562.pem (1708 bytes)
	I0127 11:11:07.761240    5908 start.go:296] duration metric: took 4.8077797s for postStartSetup
	I0127 11:11:07.764102    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:11:09.759814    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:11:09.759860    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:09.759928    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:11:12.132766    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:11:12.133212    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:12.133438    5908 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\config.json ...
	I0127 11:11:12.137229    5908 start.go:128] duration metric: took 1m59.5455659s to createHost
	I0127 11:11:12.137355    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:11:14.139463    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:11:14.140224    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:14.140224    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:11:16.561413    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:11:16.561648    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:16.566654    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:11:16.567175    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.192.249 22 <nil> <nil>}
	I0127 11:11:16.567434    5908 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:11:16.695667    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737976276.707150979
	
	I0127 11:11:16.695667    5908 fix.go:216] guest clock: 1737976276.707150979
	I0127 11:11:16.695667    5908 fix.go:229] Guest: 2025-01-27 11:11:16.707150979 +0000 UTC Remote: 2025-01-27 11:11:12.1372298 +0000 UTC m=+124.999711201 (delta=4.569921179s)
	I0127 11:11:16.695879    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:11:18.772791    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:11:18.773401    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:18.773401    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:11:21.223695    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:11:21.224339    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:21.229544    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:11:21.230235    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.192.249 22 <nil> <nil>}
	I0127 11:11:21.230235    5908 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1737976276
	I0127 11:11:21.382762    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 27 11:11:16 UTC 2025
	
	I0127 11:11:21.382848    5908 fix.go:236] clock set: Mon Jan 27 11:11:16 UTC 2025
	 (err=<nil>)
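The guest clock came up about 4.6 s away from the host reference (delta=4.569921179s above), so the provisioner reset it with sudo date -s before releasing the machines lock. To re-check the skew by hand later, one option (a sketch, assuming the ha-011400 profile is still running) is:

    minikube -p ha-011400 ssh -- date +%s.%N    # compare against the host's own clock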
	I0127 11:11:21.382848    5908 start.go:83] releasing machines lock for "ha-011400", held for 2m8.7913742s
	I0127 11:11:21.383022    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:11:23.389730    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:11:23.390549    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:23.390636    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:11:25.787606    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:11:25.787655    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:25.791275    5908 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0127 11:11:25.791275    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:11:25.804128    5908 ssh_runner.go:195] Run: cat /version.json
	I0127 11:11:25.804793    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:11:27.990178    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:11:27.990975    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:27.991051    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:11:28.027708    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:11:28.027708    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:28.027875    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:11:30.588051    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:11:30.588107    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:30.588107    5908 sshutil.go:53] new ssh client: &{IP:172.29.192.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\id_rsa Username:docker}
	I0127 11:11:30.610750    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:11:30.610750    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:11:30.611355    5908 sshutil.go:53] new ssh client: &{IP:172.29.192.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\id_rsa Username:docker}
	I0127 11:11:30.690624    5908 ssh_runner.go:235] Completed: cat /version.json: (4.886446s)
	I0127 11:11:30.702245    5908 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9109194s)
	W0127 11:11:30.702245    5908 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
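This is the origin of the registry warning emitted a few lines below: the connectivity probe is run inside the Linux guest over SSH but is invoked as curl.exe, which exists only on the Windows host, so it exits 127 regardless of whether registry.k8s.io is actually reachable. A minimal sketch of the same probe using the guest's own binary (assuming the Buildroot image ships curl) would be:

    curl -sS -m 2 https://registry.k8s.io/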
	I0127 11:11:30.702938    5908 ssh_runner.go:195] Run: systemctl --version
	I0127 11:11:30.721424    5908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0127 11:11:30.729429    5908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:11:30.739793    5908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:11:30.766360    5908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 11:11:30.766360    5908 start.go:495] detecting cgroup driver to use...
	I0127 11:11:30.766742    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:11:30.811395    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 11:11:30.842288    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0127 11:11:30.848317    5908 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0127 11:11:30.848454    5908 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0127 11:11:30.864266    5908 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 11:11:30.876184    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 11:11:30.905910    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 11:11:30.933956    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 11:11:30.960270    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 11:11:30.988350    5908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:11:31.019360    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 11:11:31.053690    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 11:11:31.088425    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 11:11:31.116823    5908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:11:31.133902    5908 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 11:11:31.145998    5908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 11:11:31.181685    5908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 11:11:31.213595    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:11:31.426063    5908 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 11:11:31.459596    5908 start.go:495] detecting cgroup driver to use...
	I0127 11:11:31.470026    5908 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0127 11:11:31.501795    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:11:31.534125    5908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 11:11:31.576708    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:11:31.608353    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 11:11:31.640956    5908 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 11:11:31.707701    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 11:11:31.728970    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:11:31.771490    5908 ssh_runner.go:195] Run: which cri-dockerd
	I0127 11:11:31.792650    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0127 11:11:31.810007    5908 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0127 11:11:31.852912    5908 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0127 11:11:32.051359    5908 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0127 11:11:32.234576    5908 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0127 11:11:32.234897    5908 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0127 11:11:32.279287    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:11:32.482928    5908 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 11:11:35.078528    5908 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5954283s)
	I0127 11:11:35.089502    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0127 11:11:35.125404    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 11:11:35.159439    5908 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0127 11:11:35.364144    5908 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0127 11:11:35.564596    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:11:35.748802    5908 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0127 11:11:35.786793    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 11:11:35.816974    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:11:36.006247    5908 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0127 11:11:36.099786    5908 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0127 11:11:36.111632    5908 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0127 11:11:36.119620    5908 start.go:563] Will wait 60s for crictl version
	I0127 11:11:36.129364    5908 ssh_runner.go:195] Run: which crictl
	I0127 11:11:36.145183    5908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:11:36.196286    5908 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0127 11:11:36.205610    5908 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 11:11:36.248769    5908 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 11:11:36.299988    5908 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0127 11:11:36.299988    5908 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0127 11:11:36.303987    5908 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0127 11:11:36.303987    5908 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0127 11:11:36.303987    5908 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0127 11:11:36.303987    5908 ip.go:211] Found interface: {Index:17 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:43:05:a6 Flags:up|broadcast|multicast|running}
	I0127 11:11:36.307032    5908 ip.go:214] interface addr: fe80::8ceb:a58b:811a:7c79/64
	I0127 11:11:36.307032    5908 ip.go:214] interface addr: 172.29.192.1/20
	I0127 11:11:36.316048    5908 ssh_runner.go:195] Run: grep 172.29.192.1	host.minikube.internal$ /etc/hosts
	I0127 11:11:36.323051    5908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:11:36.354737    5908 kubeadm.go:883] updating cluster {Name:ha-011400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-011400 Namespace:default APIServerHAVIP:172.29.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.192.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 11:11:36.355727    5908 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0127 11:11:36.362824    5908 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 11:11:36.386228    5908 docker.go:689] Got preloaded images: 
	I0127 11:11:36.386228    5908 docker.go:695] registry.k8s.io/kube-apiserver:v1.32.1 wasn't preloaded
	I0127 11:11:36.396856    5908 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0127 11:11:36.424471    5908 ssh_runner.go:195] Run: which lz4
	I0127 11:11:36.429988    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0127 11:11:36.440927    5908 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 11:11:36.446457    5908 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 11:11:36.446736    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (349810983 bytes)
	I0127 11:11:38.441566    5908 docker.go:653] duration metric: took 2.0112085s to copy over tarball
	I0127 11:11:38.454490    5908 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 11:11:46.790497    5908 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.3359203s)
	I0127 11:11:46.790497    5908 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 11:11:46.848768    5908 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0127 11:11:46.867274    5908 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0127 11:11:46.909862    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:11:47.100078    5908 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 11:11:50.401617    5908 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3013415s)
	I0127 11:11:50.410397    5908 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 11:11:50.435484    5908 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0127 11:11:50.435647    5908 cache_images.go:84] Images are preloaded, skipping loading
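The list above covers the image set kubeadm expects for v1.32.1 (plus minikube's storage-provisioner), which is why the preflight image pull during init below should find everything already present in Docker. The expected set can be cross-checked on the node with, for example (a sketch using the bundled kubeadm binary):

    sudo /var/lib/minikube/binaries/v1.32.1/kubeadm config images list --kubernetes-version v1.32.1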
	I0127 11:11:50.435647    5908 kubeadm.go:934] updating node { 172.29.192.249 8443 v1.32.1 docker true true} ...
	I0127 11:11:50.435969    5908 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-011400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.192.249
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:ha-011400 Namespace:default APIServerHAVIP:172.29.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
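The unit fragment above is what later lands on the node as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the 310-byte transfer at 11:11:50 below). Once kubelet is running, the effective unit and its flags can be inspected in the guest with, for instance:

    sudo systemctl cat kubelet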
	I0127 11:11:50.444978    5908 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0127 11:11:50.505231    5908 cni.go:84] Creating CNI manager for ""
	I0127 11:11:50.505280    5908 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0127 11:11:50.505280    5908 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 11:11:50.505335    5908 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.29.192.249 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-011400 NodeName:ha-011400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.29.192.249"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.29.192.249 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 11:11:50.505375    5908 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.29.192.249
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-011400"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.29.192.249"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.29.192.249"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
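The rendered kubeadm config above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new further down and promoted to /var/tmp/minikube/kubeadm.yaml right before init. If it ever needs a manual sanity check on the guest, one option (a sketch; assumes the validate subcommand available in recent kubeadm releases) is:

    sudo /var/lib/minikube/binaries/v1.32.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml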
	
	I0127 11:11:50.505375    5908 kube-vip.go:115] generating kube-vip config ...
	I0127 11:11:50.516038    5908 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0127 11:11:50.542312    5908 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0127 11:11:50.542450    5908 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.29.207.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
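As configured above, kube-vip provides the HA entry point for this profile: it should hold the VIP 172.29.207.254 on eth0 from whichever control-plane node owns the plndr-cp-lock lease, and load-balance port 8443 across members (control-plane load-balancing was auto-enabled just above). A quick manual check from inside the guest once the control plane is up (a sketch):

    ip -4 addr show dev eth0 | grep 172.29.207.254     # VIP bound on the current leader
    curl -k -m 2 https://172.29.207.254:8443/healthz   # API answering on the VIP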
	I0127 11:11:50.552753    5908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 11:11:50.567451    5908 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 11:11:50.579484    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0127 11:11:50.596884    5908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0127 11:11:50.627481    5908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:11:50.656126    5908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I0127 11:11:50.685796    5908 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0127 11:11:50.724528    5908 ssh_runner.go:195] Run: grep 172.29.207.254	control-plane.minikube.internal$ /etc/hosts
	I0127 11:11:50.730535    5908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:11:50.761623    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:11:50.939221    5908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:11:50.965161    5908 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400 for IP: 172.29.192.249
	I0127 11:11:50.965161    5908 certs.go:194] generating shared ca certs ...
	I0127 11:11:50.965308    5908 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:11:50.965963    5908 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0127 11:11:50.966485    5908 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0127 11:11:50.966731    5908 certs.go:256] generating profile certs ...
	I0127 11:11:50.967351    5908 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\client.key
	I0127 11:11:50.967351    5908 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\client.crt with IP's: []
	I0127 11:11:51.134209    5908 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\client.crt ...
	I0127 11:11:51.134209    5908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\client.crt: {Name:mkba84c6952d76a5735a9db83ce4c4badf7ffeb8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:11:51.135583    5908 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\client.key ...
	I0127 11:11:51.135583    5908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\client.key: {Name:mke75589f2e06ab48fc67ae6f019dea0ee774b04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:11:51.137017    5908 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key.f8025a70
	I0127 11:11:51.137017    5908 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt.f8025a70 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.29.192.249 172.29.207.254]
	I0127 11:11:51.201513    5908 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt.f8025a70 ...
	I0127 11:11:51.201513    5908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt.f8025a70: {Name:mkb4d8925a0047dcb0da4f5c22cc0bf9458620c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:11:51.202610    5908 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key.f8025a70 ...
	I0127 11:11:51.202610    5908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key.f8025a70: {Name:mk5499447aca49b42a042f12c2ffd4a4e3eee915 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:11:51.203623    5908 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt.f8025a70 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt
	I0127 11:11:51.218217    5908 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key.f8025a70 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key
	I0127 11:11:51.220322    5908 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.key
	I0127 11:11:51.220566    5908 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.crt with IP's: []
	I0127 11:11:51.412816    5908 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.crt ...
	I0127 11:11:51.412816    5908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.crt: {Name:mkf67e1f2becfa1a0326341caca64d6a4aa03284 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:11:51.415060    5908 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.key ...
	I0127 11:11:51.415060    5908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.key: {Name:mk5bf10f49157fce23a6fa1649fd2e473d0f78e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:11:51.415930    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0127 11:11:51.416532    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0127 11:11:51.416728    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0127 11:11:51.416896    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0127 11:11:51.416896    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0127 11:11:51.416896    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0127 11:11:51.416896    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0127 11:11:51.429175    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0127 11:11:51.430375    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem (1338 bytes)
	W0127 11:11:51.431164    5908 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956_empty.pem, impossibly tiny 0 bytes
	I0127 11:11:51.431164    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0127 11:11:51.431498    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0127 11:11:51.431905    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0127 11:11:51.431905    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0127 11:11:51.432650    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem (1708 bytes)
	I0127 11:11:51.433001    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem -> /usr/share/ca-certificates/5956.pem
	I0127 11:11:51.433180    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> /usr/share/ca-certificates/59562.pem
	I0127 11:11:51.433343    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:11:51.433500    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:11:51.476702    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:11:51.519341    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:11:51.561983    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 11:11:51.610569    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 11:11:51.657176    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 11:11:51.706019    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:11:51.747867    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 11:11:51.789263    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem --> /usr/share/ca-certificates/5956.pem (1338 bytes)
	I0127 11:11:51.829637    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem --> /usr/share/ca-certificates/59562.pem (1708 bytes)
	I0127 11:11:51.870789    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:11:51.913415    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 11:11:51.959158    5908 ssh_runner.go:195] Run: openssl version
	I0127 11:11:51.978863    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:11:52.007768    5908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:11:52.015516    5908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:11:52.025877    5908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:11:52.045885    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:11:52.073834    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5956.pem && ln -fs /usr/share/ca-certificates/5956.pem /etc/ssl/certs/5956.pem"
	I0127 11:11:52.101764    5908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5956.pem
	I0127 11:11:52.108405    5908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:52 /usr/share/ca-certificates/5956.pem
	I0127 11:11:52.118376    5908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5956.pem
	I0127 11:11:52.136914    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5956.pem /etc/ssl/certs/51391683.0"
	I0127 11:11:52.166813    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59562.pem && ln -fs /usr/share/ca-certificates/59562.pem /etc/ssl/certs/59562.pem"
	I0127 11:11:52.195868    5908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59562.pem
	I0127 11:11:52.202178    5908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:52 /usr/share/ca-certificates/59562.pem
	I0127 11:11:52.212975    5908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59562.pem
	I0127 11:11:52.231370    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59562.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 11:11:52.260852    5908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:11:52.268071    5908 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 11:11:52.268405    5908 kubeadm.go:392] StartCluster: {Name:ha-011400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-011400 Namespace:default APIServerHAVIP:172.29.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.192.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:11:52.276725    5908 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0127 11:11:52.310633    5908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 11:11:52.338469    5908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 11:11:52.363347    5908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 11:11:52.379311    5908 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 11:11:52.379311    5908 kubeadm.go:157] found existing configuration files:
	
	I0127 11:11:52.389376    5908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 11:11:52.411728    5908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 11:11:52.426114    5908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 11:11:52.457965    5908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 11:11:52.478921    5908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 11:11:52.490610    5908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 11:11:52.520032    5908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 11:11:52.541817    5908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 11:11:52.553672    5908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 11:11:52.583006    5908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 11:11:52.601375    5908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 11:11:52.614820    5908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 11:11:52.638562    5908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 11:11:52.879181    5908 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 11:11:52.879392    5908 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 11:11:53.031198    5908 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 11:11:53.031539    5908 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 11:11:53.031539    5908 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 11:11:53.053180    5908 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 11:11:53.056434    5908 out.go:235]   - Generating certificates and keys ...
	I0127 11:11:53.056695    5908 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 11:11:53.056695    5908 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 11:11:53.272424    5908 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 11:11:53.670890    5908 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 11:11:53.819417    5908 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 11:11:53.998510    5908 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 11:11:54.249756    5908 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 11:11:54.250161    5908 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-011400 localhost] and IPs [172.29.192.249 127.0.0.1 ::1]
	I0127 11:11:54.306093    5908 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 11:11:54.306468    5908 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-011400 localhost] and IPs [172.29.192.249 127.0.0.1 ::1]
	I0127 11:11:54.553398    5908 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 11:11:55.127728    5908 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 11:11:55.547148    5908 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 11:11:55.549135    5908 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 11:11:55.816743    5908 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 11:11:55.970494    5908 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 11:11:56.103571    5908 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 11:11:56.670631    5908 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 11:11:57.002691    5908 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 11:11:57.003985    5908 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 11:11:57.007311    5908 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 11:11:57.013123    5908 out.go:235]   - Booting up control plane ...
	I0127 11:11:57.013426    5908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 11:11:57.013606    5908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 11:11:57.013776    5908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 11:11:57.038786    5908 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 11:11:57.047306    5908 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 11:11:57.047439    5908 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 11:11:57.252031    5908 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 11:11:57.252466    5908 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 11:11:58.253645    5908 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002154367s
	I0127 11:11:58.253711    5908 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 11:12:07.188471    5908 kubeadm.go:310] [api-check] The API server is healthy after 8.93489925s
	I0127 11:12:07.209946    5908 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 11:12:07.239557    5908 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 11:12:07.280863    5908 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 11:12:07.280863    5908 kubeadm.go:310] [mark-control-plane] Marking the node ha-011400 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 11:12:07.297437    5908 kubeadm.go:310] [bootstrap-token] Using token: 7oks3g.btlejrxbw13gzxd7
	I0127 11:12:07.300662    5908 out.go:235]   - Configuring RBAC rules ...
	I0127 11:12:07.301245    5908 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 11:12:07.311249    5908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 11:12:07.332069    5908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 11:12:07.342128    5908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 11:12:07.353406    5908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 11:12:07.366456    5908 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 11:12:07.600176    5908 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 11:12:08.076410    5908 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 11:12:08.602413    5908 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 11:12:08.603774    5908 kubeadm.go:310] 
	I0127 11:12:08.604639    5908 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 11:12:08.604728    5908 kubeadm.go:310] 
	I0127 11:12:08.605066    5908 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 11:12:08.605066    5908 kubeadm.go:310] 
	I0127 11:12:08.605160    5908 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 11:12:08.605378    5908 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 11:12:08.605530    5908 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 11:12:08.605591    5908 kubeadm.go:310] 
	I0127 11:12:08.605697    5908 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 11:12:08.605697    5908 kubeadm.go:310] 
	I0127 11:12:08.605697    5908 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 11:12:08.605697    5908 kubeadm.go:310] 
	I0127 11:12:08.605697    5908 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 11:12:08.606328    5908 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 11:12:08.606696    5908 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 11:12:08.606743    5908 kubeadm.go:310] 
	I0127 11:12:08.606998    5908 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 11:12:08.606998    5908 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 11:12:08.606998    5908 kubeadm.go:310] 
	I0127 11:12:08.606998    5908 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7oks3g.btlejrxbw13gzxd7 \
	I0127 11:12:08.607669    5908 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7b53ba02e26824ededdf08178373023b65bda8005ddad46edfe91cb6a3cb8d3f \
	I0127 11:12:08.607787    5908 kubeadm.go:310] 	--control-plane 
	I0127 11:12:08.607787    5908 kubeadm.go:310] 
	I0127 11:12:08.608014    5908 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 11:12:08.608014    5908 kubeadm.go:310] 
	I0127 11:12:08.608014    5908 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7oks3g.btlejrxbw13gzxd7 \
	I0127 11:12:08.608605    5908 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7b53ba02e26824ededdf08178373023b65bda8005ddad46edfe91cb6a3cb8d3f 
	I0127 11:12:08.610642    5908 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
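Note: the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key (SubjectPublicKeyInfo), not of the whole certificate. A minimal Go sketch of that computation, assuming the CA certificate is available locally as ca.crt (the path is a placeholder; kubeadm's default location is /etc/kubernetes/pki/ca.crt):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Placeholder path; point this at the cluster CA certificate.
	pemBytes, err := os.ReadFile("ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER encoding of the CA's SubjectPublicKeyInfo.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}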
	I0127 11:12:08.610701    5908 cni.go:84] Creating CNI manager for ""
	I0127 11:12:08.610766    5908 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0127 11:12:08.615658    5908 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0127 11:12:08.628648    5908 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0127 11:12:08.636828    5908 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0127 11:12:08.636971    5908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0127 11:12:08.678902    5908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0127 11:12:09.351264    5908 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 11:12:09.363240    5908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-011400 minikube.k8s.io/updated_at=2025_01_27T11_12_09_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=ha-011400 minikube.k8s.io/primary=true
	I0127 11:12:09.364238    5908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:12:09.375371    5908 ops.go:34] apiserver oom_adj: -16
	I0127 11:12:09.602364    5908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:12:10.101790    5908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:12:10.604489    5908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:12:11.104480    5908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 11:12:11.229565    5908 kubeadm.go:1113] duration metric: took 1.8782808s to wait for elevateKubeSystemPrivileges
	I0127 11:12:11.229565    5908 kubeadm.go:394] duration metric: took 18.9609621s to StartCluster
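Note: the repeated "kubectl get sa default" runs above are the wait loop behind the elevateKubeSystemPrivileges step; it effectively waits until the service account controller has created the "default" ServiceAccount before the RBAC setup is considered done. A rough Go sketch of that kind of poll, assuming kubectl and a kubeconfig are available on PATH (an illustration, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // timeout chosen arbitrarily for the sketch
	for time.Now().Before(deadline) {
		// Exits 0 only once the "default" ServiceAccount exists in the default namespace.
		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}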
	I0127 11:12:11.229565    5908 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:12:11.229565    5908 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 11:12:11.233122    5908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:12:11.235479    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 11:12:11.235479    5908 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.29.192.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 11:12:11.235479    5908 start.go:241] waiting for startup goroutines ...
	I0127 11:12:11.235479    5908 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 11:12:11.235790    5908 addons.go:69] Setting storage-provisioner=true in profile "ha-011400"
	I0127 11:12:11.235790    5908 addons.go:69] Setting default-storageclass=true in profile "ha-011400"
	I0127 11:12:11.235844    5908 addons.go:238] Setting addon storage-provisioner=true in "ha-011400"
	I0127 11:12:11.235844    5908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-011400"
	I0127 11:12:11.236153    5908 host.go:66] Checking if "ha-011400" exists ...
	I0127 11:12:11.236153    5908 config.go:182] Loaded profile config "ha-011400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 11:12:11.236903    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:12:11.237572    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:12:11.379803    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.29.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 11:12:11.784071    5908 start.go:971] {"host.minikube.internal": 172.29.192.1} host record injected into CoreDNS's ConfigMap
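Note: the long "sed | kubectl replace" pipeline above is what injects the host.minikube.internal record reported on the line just above. Reconstructed from that sed expression, the fragment inserted into the CoreDNS Corefile (immediately before the existing "forward . /etc/resolv.conf" line, with an extra "log" directive added before "errors") is:

        hosts {
           172.29.192.1 host.minikube.internal
           fallthrough
        }

The 172.29.192.1 address is this run's host gateway, so pods and the guest resolver can reach the Windows host by name.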
	I0127 11:12:13.523490    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:12:13.523490    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:13.523810    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:12:13.524293    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:13.524824    5908 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 11:12:13.525707    5908 kapi.go:59] client config for ha-011400: &rest.Config{Host:"https://172.29.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-011400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-011400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x301e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	I0127 11:12:13.527326    5908 cert_rotation.go:140] Starting client certificate rotation controller
	I0127 11:12:13.527682    5908 addons.go:238] Setting addon default-storageclass=true in "ha-011400"
	I0127 11:12:13.527829    5908 host.go:66] Checking if "ha-011400" exists ...
	I0127 11:12:13.528743    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:12:13.529451    5908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 11:12:13.532832    5908 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:12:13.532832    5908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 11:12:13.532832    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:12:15.837462    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:12:15.837522    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:15.837582    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:12:15.972999    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:12:15.973963    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:15.974023    5908 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 11:12:15.974113    5908 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 11:12:15.974208    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:12:18.269334    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:12:18.269334    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:18.269334    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:12:18.558604    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:12:18.558604    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:18.559608    5908 sshutil.go:53] new ssh client: &{IP:172.29.192.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\id_rsa Username:docker}
	I0127 11:12:18.723181    5908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 11:12:20.716378    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:12:20.716464    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:20.716588    5908 sshutil.go:53] new ssh client: &{IP:172.29.192.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\id_rsa Username:docker}
	I0127 11:12:20.845054    5908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 11:12:20.998905    5908 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0127 11:12:20.998979    5908 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0127 11:12:20.998979    5908 round_trippers.go:463] GET https://172.29.207.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0127 11:12:20.998979    5908 round_trippers.go:469] Request Headers:
	I0127 11:12:20.998979    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:12:20.998979    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:12:21.012855    5908 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0127 11:12:21.013714    5908 round_trippers.go:463] PUT https://172.29.207.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0127 11:12:21.013714    5908 round_trippers.go:469] Request Headers:
	I0127 11:12:21.013714    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:12:21.013714    5908 round_trippers.go:473]     Content-Type: application/json
	I0127 11:12:21.013714    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:12:21.018345    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:12:21.021152    5908 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 11:12:21.025196    5908 addons.go:514] duration metric: took 9.7896147s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0127 11:12:21.025196    5908 start.go:246] waiting for cluster config update ...
	I0127 11:12:21.025196    5908 start.go:255] writing updated cluster config ...
	I0127 11:12:21.029600    5908 out.go:201] 
	I0127 11:12:21.049748    5908 config.go:182] Loaded profile config "ha-011400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 11:12:21.049847    5908 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\config.json ...
	I0127 11:12:21.059069    5908 out.go:177] * Starting "ha-011400-m02" control-plane node in "ha-011400" cluster
	I0127 11:12:21.061168    5908 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0127 11:12:21.061751    5908 cache.go:56] Caching tarball of preloaded images
	I0127 11:12:21.061970    5908 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 11:12:21.062457    5908 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0127 11:12:21.062498    5908 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\config.json ...
	I0127 11:12:21.068754    5908 start.go:360] acquireMachinesLock for ha-011400-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 11:12:21.068754    5908 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-011400-m02"
	I0127 11:12:21.069404    5908 start.go:93] Provisioning new machine with config: &{Name:ha-011400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-011400 Namespace:default APIServerHAVIP:172.29.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.192.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 11:12:21.069404    5908 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0127 11:12:21.072587    5908 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0127 11:12:21.073487    5908 start.go:159] libmachine.API.Create for "ha-011400" (driver="hyperv")
	I0127 11:12:21.073487    5908 client.go:168] LocalClient.Create starting
	I0127 11:12:21.073897    5908 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0127 11:12:21.073897    5908 main.go:141] libmachine: Decoding PEM data...
	I0127 11:12:21.074374    5908 main.go:141] libmachine: Parsing certificate...
	I0127 11:12:21.074536    5908 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0127 11:12:21.074779    5908 main.go:141] libmachine: Decoding PEM data...
	I0127 11:12:21.074779    5908 main.go:141] libmachine: Parsing certificate...
	I0127 11:12:21.074779    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0127 11:12:22.883268    5908 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0127 11:12:22.883268    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:22.883533    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0127 11:12:24.543408    5908 main.go:141] libmachine: [stdout =====>] : False
	
	I0127 11:12:24.543632    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:24.543632    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0127 11:12:25.993825    5908 main.go:141] libmachine: [stdout =====>] : True
	
	I0127 11:12:25.993825    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:25.994193    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0127 11:12:29.578002    5908 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0127 11:12:29.578002    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:29.580769    5908 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 11:12:30.099725    5908 main.go:141] libmachine: Creating SSH key...
	I0127 11:12:30.247062    5908 main.go:141] libmachine: Creating VM...
	I0127 11:12:30.247062    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0127 11:12:33.149678    5908 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0127 11:12:33.149785    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:33.149846    5908 main.go:141] libmachine: Using switch "Default Switch"
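Note: the ConvertTo-Json query above returns every External vSwitch plus the well-known "Default Switch" (GUID c08cb7b8-9b3c-408e-8e30-5e16a3aeb444), and the driver picks its switch from that list; in this run only the Default Switch matched. A small Go sketch of parsing that JSON output (the struct and selection logic here are illustrative, not the libmachine driver's actual types):

package main

import (
	"encoding/json"
	"fmt"
)

// vmSwitch mirrors the fields selected by the Get-VMSwitch query above.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

func main() {
	// Stdout captured from the PowerShell invocation in the log above.
	raw := []byte(`[
    {
        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
        "Name":  "Default Switch",
        "SwitchType":  1
    }
]`)

	var switches []vmSwitch
	if err := json.Unmarshal(raw, &switches); err != nil {
		panic(err)
	}
	if len(switches) == 0 {
		panic("no usable Hyper-V switch found")
	}
	// Only one switch matched in this run, so it is the one the driver ends up using.
	fmt.Printf("Using switch %q\n", switches[0].Name)
}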
	I0127 11:12:33.149915    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0127 11:12:34.872870    5908 main.go:141] libmachine: [stdout =====>] : True
	
	I0127 11:12:34.873031    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:34.873031    5908 main.go:141] libmachine: Creating VHD
	I0127 11:12:34.873130    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0127 11:12:38.547633    5908 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 53CB5B06-04D8-4770-9AFB-1386F250ED69
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0127 11:12:38.547633    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:38.548220    5908 main.go:141] libmachine: Writing magic tar header
	I0127 11:12:38.548220    5908 main.go:141] libmachine: Writing SSH key tar header
	I0127 11:12:38.561142    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0127 11:12:41.653001    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:12:41.653331    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:41.653387    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m02\disk.vhd' -SizeBytes 20000MB
	I0127 11:12:44.129887    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:12:44.130662    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:44.130662    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-011400-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0127 11:12:47.648667    5908 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-011400-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0127 11:12:47.649483    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:47.649483    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-011400-m02 -DynamicMemoryEnabled $false
	I0127 11:12:49.801045    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:12:49.801045    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:49.801128    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-011400-m02 -Count 2
	I0127 11:12:51.939264    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:12:51.939264    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:51.940263    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-011400-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m02\boot2docker.iso'
	I0127 11:12:54.399753    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:12:54.400441    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:54.400533    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-011400-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m02\disk.vhd'
	I0127 11:12:56.975703    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:12:56.976530    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:56.976530    5908 main.go:141] libmachine: Starting VM...
	I0127 11:12:56.976530    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-011400-m02
	I0127 11:12:59.988152    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:12:59.988495    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:12:59.988495    5908 main.go:141] libmachine: Waiting for host to start...
	I0127 11:12:59.988495    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:13:02.211357    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:13:02.211357    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:02.211357    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:13:04.696487    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:13:04.696566    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:05.697408    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:13:07.895002    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:13:07.895002    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:07.895002    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:13:10.370120    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:13:10.370120    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:11.371452    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:13:13.530152    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:13:13.530152    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:13.530152    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:13:16.007157    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:13:16.007157    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:17.007910    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:13:19.177236    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:13:19.177301    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:19.177370    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:13:21.661297    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:13:21.661297    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:22.663087    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:13:24.875965    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:13:24.875965    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:24.876103    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:13:27.481388    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:13:27.481578    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:27.481659    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:13:29.530717    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:13:29.531275    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:29.531275    5908 machine.go:93] provisionDockerMachine start ...
	I0127 11:13:29.531375    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:13:31.633931    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:13:31.633931    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:31.633931    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:13:34.176933    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:13:34.176983    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:34.182221    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:13:34.198242    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.195.173 22 <nil> <nil>}
	I0127 11:13:34.198339    5908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 11:13:34.328627    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 11:13:34.328735    5908 buildroot.go:166] provisioning hostname "ha-011400-m02"
	I0127 11:13:34.328735    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:13:36.376031    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:13:36.376031    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:36.376143    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:13:38.805787    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:13:38.805787    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:38.812607    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:13:38.813341    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.195.173 22 <nil> <nil>}
	I0127 11:13:38.813341    5908 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-011400-m02 && echo "ha-011400-m02" | sudo tee /etc/hostname
	I0127 11:13:38.968354    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-011400-m02
	
	I0127 11:13:38.968456    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:13:41.001977    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:13:41.002840    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:41.002840    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:13:43.452505    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:13:43.452505    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:43.457496    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:13:43.458191    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.195.173 22 <nil> <nil>}
	I0127 11:13:43.458191    5908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-011400-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-011400-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-011400-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:13:43.596906    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 11:13:43.596906    5908 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0127 11:13:43.596906    5908 buildroot.go:174] setting up certificates
	I0127 11:13:43.596906    5908 provision.go:84] configureAuth start
	I0127 11:13:43.596906    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:13:45.703466    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:13:45.704485    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:45.704534    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:13:48.179855    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:13:48.180196    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:48.180297    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:13:50.301544    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:13:50.301544    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:50.301544    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:13:52.744076    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:13:52.744383    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:52.744445    5908 provision.go:143] copyHostCerts
	I0127 11:13:52.744445    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0127 11:13:52.744445    5908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0127 11:13:52.744445    5908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0127 11:13:52.745229    5908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0127 11:13:52.746610    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0127 11:13:52.746761    5908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0127 11:13:52.746761    5908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0127 11:13:52.747290    5908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0127 11:13:52.748035    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0127 11:13:52.748571    5908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0127 11:13:52.748657    5908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0127 11:13:52.748985    5908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0127 11:13:52.750012    5908 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-011400-m02 san=[127.0.0.1 172.29.195.173 ha-011400-m02 localhost minikube]
	I0127 11:13:53.033268    5908 provision.go:177] copyRemoteCerts
	I0127 11:13:53.044263    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:13:53.044263    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:13:55.090856    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:13:55.090856    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:55.091434    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:13:57.585152    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:13:57.585152    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:57.586557    5908 sshutil.go:53] new ssh client: &{IP:172.29.195.173 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m02\id_rsa Username:docker}
	I0127 11:13:57.688637    5908 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6443256s)
	I0127 11:13:57.688739    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0127 11:13:57.689389    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 11:13:57.735155    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0127 11:13:57.735155    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 11:13:57.779501    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0127 11:13:57.779501    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 11:13:57.826836    5908 provision.go:87] duration metric: took 14.2297823s to configureAuth
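Note: configureAuth above pushed ca.pem, server.pem and server-key.pem into /etc/docker so that the dockerd started later (with --tlsverify and -H tcp://0.0.0.0:2376, see the unit file written below) only accepts mutually-authenticated TLS clients. A minimal Go sketch of a client talking to that endpoint, assuming the matching client cert and key from the .minikube\certs directory sit at the placeholder paths cert.pem and key.pem:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// CA that signed the daemon's server.pem (ca.pem in the provision log above).
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		panic("could not parse ca.pem")
	}
	// Client certificate/key; placeholder paths for the .minikube certs.
	clientCert, err := tls.LoadX509KeyPair("cert.pem", "key.pem")
	if err != nil {
		panic(err)
	}

	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{
			RootCAs:      pool,
			Certificates: []tls.Certificate{clientCert},
		},
	}}

	// /_ping is the Docker Engine health endpoint; 2376 is the TLS port from the unit file.
	resp, err := client.Get("https://172.29.195.173:2376/_ping")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}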
	I0127 11:13:57.826836    5908 buildroot.go:189] setting minikube options for container-runtime
	I0127 11:13:57.827429    5908 config.go:182] Loaded profile config "ha-011400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 11:13:57.827429    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:13:59.950211    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:13:59.950211    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:13:59.950382    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:14:02.461510    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:14:02.461510    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:02.466602    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:14:02.467146    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.195.173 22 <nil> <nil>}
	I0127 11:14:02.467146    5908 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0127 11:14:02.586009    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0127 11:14:02.586009    5908 buildroot.go:70] root file system type: tmpfs
	I0127 11:14:02.586546    5908 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0127 11:14:02.586683    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:14:04.671708    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:14:04.671989    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:04.671989    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:14:07.175168    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:14:07.175168    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:07.181011    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:14:07.181011    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.195.173 22 <nil> <nil>}
	I0127 11:14:07.181616    5908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.29.192.249"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0127 11:14:07.333037    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.29.192.249
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0127 11:14:07.333116    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:14:09.413564    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:14:09.414273    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:09.414434    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:14:11.908031    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:14:11.908031    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:11.913216    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:14:11.913906    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.195.173 22 <nil> <nil>}
	I0127 11:14:11.913906    5908 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0127 11:14:14.152693    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0127 11:14:14.152778    5908 machine.go:96] duration metric: took 44.6210397s to provisionDockerMachine
	I0127 11:14:14.152778    5908 client.go:171] duration metric: took 1m53.0781151s to LocalClient.Create
	I0127 11:14:14.152935    5908 start.go:167] duration metric: took 1m53.0782716s to libmachine.API.Create "ha-011400"
	I0127 11:14:14.152935    5908 start.go:293] postStartSetup for "ha-011400-m02" (driver="hyperv")
	I0127 11:14:14.152935    5908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:14:14.163380    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:14:14.163380    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:14:16.308789    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:14:16.308886    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:16.308979    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:14:18.790253    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:14:18.791258    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:18.791258    5908 sshutil.go:53] new ssh client: &{IP:172.29.195.173 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m02\id_rsa Username:docker}
	I0127 11:14:18.896469    5908 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.733039s)
	I0127 11:14:18.907688    5908 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:14:18.914326    5908 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 11:14:18.914326    5908 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0127 11:14:18.914326    5908 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0127 11:14:18.915588    5908 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> 59562.pem in /etc/ssl/certs
	I0127 11:14:18.915588    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> /etc/ssl/certs/59562.pem
	I0127 11:14:18.925643    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:14:18.942978    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem --> /etc/ssl/certs/59562.pem (1708 bytes)
	I0127 11:14:18.984503    5908 start.go:296] duration metric: took 4.8315174s for postStartSetup
	I0127 11:14:18.987055    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:14:21.127908    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:14:21.127908    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:21.127908    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:14:23.541931    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:14:23.543021    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:23.543021    5908 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\config.json ...
	I0127 11:14:23.545720    5908 start.go:128] duration metric: took 2m2.4750415s to createHost
	I0127 11:14:23.545720    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:14:25.652946    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:14:25.653114    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:25.653221    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:14:28.138298    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:14:28.138298    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:28.144154    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:14:28.144777    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.195.173 22 <nil> <nil>}
	I0127 11:14:28.144777    5908 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:14:28.265300    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737976468.277162667
	
	I0127 11:14:28.265405    5908 fix.go:216] guest clock: 1737976468.277162667
	I0127 11:14:28.265405    5908 fix.go:229] Guest: 2025-01-27 11:14:28.277162667 +0000 UTC Remote: 2025-01-27 11:14:23.54572 +0000 UTC m=+316.406210801 (delta=4.731442667s)
	I0127 11:14:28.265405    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:14:30.354013    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:14:30.354286    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:30.354286    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:14:32.807954    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:14:32.808172    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:32.813425    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:14:32.814269    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.195.173 22 <nil> <nil>}
	I0127 11:14:32.814269    5908 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1737976468
	I0127 11:14:32.958704    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 27 11:14:28 UTC 2025
	
	I0127 11:14:32.958704    5908 fix.go:236] clock set: Mon Jan 27 11:14:28 UTC 2025
	 (err=<nil>)
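	[Editor's note: the fix.go lines above show minikube reading the guest clock with `date +%s.%N`, computing the drift against the host, and then resetting the guest with `sudo date -s @<unix-seconds>`. The following is a minimal illustrative sketch of that comparison, not minikube's actual implementation; the helper name parseGuestClock and the use of the host's current time for the reset are assumptions made only for illustration.]

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts `date +%s.%N` output (e.g. "1737976468.277162667")
	// into a time.Time.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			// Pad/truncate the fractional part to nanosecond precision.
			frac := (parts[1] + "000000000")[:9]
			if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1737976468.277162667") // value taken from the log above
		if err != nil {
			panic(err)
		}
		host := time.Now()
		fmt.Printf("guest/host clock delta: %v\n", guest.Sub(host))
		// If the drift is large enough, the guest clock gets reset over SSH,
		// roughly as the log shows (assumption: reset target is host time):
		fmt.Printf("sudo date -s @%d\n", host.Unix())
	}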
	I0127 11:14:32.958704    5908 start.go:83] releasing machines lock for "ha-011400-m02", held for 2m11.8880646s
	I0127 11:14:32.958890    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:14:35.026686    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:14:35.026686    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:35.027003    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:14:37.469776    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:14:37.470427    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:37.473510    5908 out.go:177] * Found network options:
	I0127 11:14:37.476239    5908 out.go:177]   - NO_PROXY=172.29.192.249
	W0127 11:14:37.478437    5908 proxy.go:119] fail to check proxy env: Error ip not in block
	I0127 11:14:37.481021    5908 out.go:177]   - NO_PROXY=172.29.192.249
	W0127 11:14:37.484053    5908 proxy.go:119] fail to check proxy env: Error ip not in block
	W0127 11:14:37.484053    5908 proxy.go:119] fail to check proxy env: Error ip not in block
	I0127 11:14:37.487400    5908 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0127 11:14:37.487400    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:14:37.496455    5908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 11:14:37.496455    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:14:39.684042    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:14:39.684820    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:39.684820    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:14:39.699889    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:14:39.699889    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:39.700520    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 11:14:42.257796    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:14:42.257796    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:42.259655    5908 sshutil.go:53] new ssh client: &{IP:172.29.195.173 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m02\id_rsa Username:docker}
	I0127 11:14:42.284725    5908 main.go:141] libmachine: [stdout =====>] : 172.29.195.173
	
	I0127 11:14:42.284791    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:42.285304    5908 sshutil.go:53] new ssh client: &{IP:172.29.195.173 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m02\id_rsa Username:docker}
	I0127 11:14:42.353969    5908 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8665184s)
	W0127 11:14:42.354046    5908 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0127 11:14:42.371984    5908 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.8754784s)
	W0127 11:14:42.372067    5908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:14:42.383144    5908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:14:42.416772    5908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 11:14:42.416772    5908 start.go:495] detecting cgroup driver to use...
	I0127 11:14:42.416772    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:14:42.467809    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0127 11:14:42.477030    5908 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0127 11:14:42.477030    5908 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0127 11:14:42.504896    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 11:14:42.525153    5908 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 11:14:42.536185    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 11:14:42.566795    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 11:14:42.598027    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 11:14:42.628581    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 11:14:42.658239    5908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:14:42.687286    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 11:14:42.714325    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 11:14:42.743149    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 11:14:42.778199    5908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:14:42.799580    5908 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 11:14:42.812647    5908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 11:14:42.842140    5908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 11:14:42.866189    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:14:43.056639    5908 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 11:14:43.089996    5908 start.go:495] detecting cgroup driver to use...
	I0127 11:14:43.101263    5908 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0127 11:14:43.134780    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:14:43.168305    5908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 11:14:43.207241    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:14:43.239046    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 11:14:43.273294    5908 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 11:14:43.330357    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 11:14:43.352586    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:14:43.393283    5908 ssh_runner.go:195] Run: which cri-dockerd
	I0127 11:14:43.408902    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0127 11:14:43.427457    5908 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0127 11:14:43.466740    5908 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0127 11:14:43.655015    5908 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0127 11:14:43.864872    5908 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0127 11:14:43.864983    5908 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0127 11:14:43.908832    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:14:44.109837    5908 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 11:14:46.691962    5908 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5820983s)
	I0127 11:14:46.703337    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0127 11:14:46.736111    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 11:14:46.768059    5908 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0127 11:14:46.948019    5908 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0127 11:14:47.157589    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:14:47.357758    5908 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0127 11:14:47.395355    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 11:14:47.426466    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:14:47.615998    5908 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0127 11:14:47.724875    5908 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0127 11:14:47.735628    5908 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0127 11:14:47.744491    5908 start.go:563] Will wait 60s for crictl version
	I0127 11:14:47.755086    5908 ssh_runner.go:195] Run: which crictl
	I0127 11:14:47.771798    5908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:14:47.835071    5908 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0127 11:14:47.844512    5908 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 11:14:47.890277    5908 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 11:14:47.930823    5908 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0127 11:14:47.935065    5908 out.go:177]   - env NO_PROXY=172.29.192.249
	I0127 11:14:47.937658    5908 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0127 11:14:47.941705    5908 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0127 11:14:47.941705    5908 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0127 11:14:47.941705    5908 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0127 11:14:47.941705    5908 ip.go:211] Found interface: {Index:17 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:43:05:a6 Flags:up|broadcast|multicast|running}
	I0127 11:14:47.944705    5908 ip.go:214] interface addr: fe80::8ceb:a58b:811a:7c79/64
	I0127 11:14:47.944705    5908 ip.go:214] interface addr: 172.29.192.1/20
	I0127 11:14:47.957078    5908 ssh_runner.go:195] Run: grep 172.29.192.1	host.minikube.internal$ /etc/hosts
	I0127 11:14:47.964299    5908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:14:47.984649    5908 mustload.go:65] Loading cluster: ha-011400
	I0127 11:14:47.984775    5908 config.go:182] Loaded profile config "ha-011400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 11:14:47.985849    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:14:49.983173    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:14:49.983173    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:49.983645    5908 host.go:66] Checking if "ha-011400" exists ...
	I0127 11:14:49.986429    5908 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400 for IP: 172.29.195.173
	I0127 11:14:49.986497    5908 certs.go:194] generating shared ca certs ...
	I0127 11:14:49.986497    5908 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:14:49.987265    5908 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0127 11:14:49.987572    5908 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0127 11:14:49.987837    5908 certs.go:256] generating profile certs ...
	I0127 11:14:49.988013    5908 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\client.key
	I0127 11:14:49.988558    5908 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key.01c75780
	I0127 11:14:49.988746    5908 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt.01c75780 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.29.192.249 172.29.195.173 172.29.207.254]
	I0127 11:14:50.209513    5908 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt.01c75780 ...
	I0127 11:14:50.209513    5908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt.01c75780: {Name:mk2dd436a578522815aab4ccec2d6480bc93b80b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:14:50.211226    5908 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key.01c75780 ...
	I0127 11:14:50.211226    5908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key.01c75780: {Name:mkcaa48240e7c60511aea566a82f2f37f1d4033b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:14:50.212167    5908 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt.01c75780 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt
	I0127 11:14:50.228840    5908 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key.01c75780 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key
	I0127 11:14:50.229623    5908 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.key
	I0127 11:14:50.229623    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0127 11:14:50.229623    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0127 11:14:50.230421    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0127 11:14:50.230458    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0127 11:14:50.230668    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0127 11:14:50.230801    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0127 11:14:50.230801    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0127 11:14:50.231366    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0127 11:14:50.231366    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem (1338 bytes)
	W0127 11:14:50.232208    5908 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956_empty.pem, impossibly tiny 0 bytes
	I0127 11:14:50.232398    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0127 11:14:50.232666    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0127 11:14:50.233132    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0127 11:14:50.233132    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0127 11:14:50.234151    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem (1708 bytes)
	I0127 11:14:50.234151    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:14:50.234701    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem -> /usr/share/ca-certificates/5956.pem
	I0127 11:14:50.234978    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> /usr/share/ca-certificates/59562.pem
	I0127 11:14:50.235029    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:14:52.318789    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:14:52.318789    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:52.318863    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:14:54.756279    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:14:54.756727    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:14:54.756783    5908 sshutil.go:53] new ssh client: &{IP:172.29.192.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\id_rsa Username:docker}
	I0127 11:14:54.857617    5908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0127 11:14:54.864847    5908 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0127 11:14:54.899656    5908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0127 11:14:54.907708    5908 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0127 11:14:54.937929    5908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0127 11:14:54.947372    5908 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0127 11:14:54.975175    5908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0127 11:14:54.983883    5908 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0127 11:14:55.014804    5908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0127 11:14:55.021130    5908 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0127 11:14:55.056421    5908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0127 11:14:55.063003    5908 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0127 11:14:55.080731    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:14:55.130642    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:14:55.177266    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:14:55.223142    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 11:14:55.265417    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0127 11:14:55.310758    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 11:14:55.356618    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:14:55.409870    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 11:14:55.457580    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:14:55.504504    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem --> /usr/share/ca-certificates/5956.pem (1338 bytes)
	I0127 11:14:55.551850    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem --> /usr/share/ca-certificates/59562.pem (1708 bytes)
	I0127 11:14:55.597832    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0127 11:14:55.633740    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0127 11:14:55.666109    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0127 11:14:55.696343    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0127 11:14:55.725631    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0127 11:14:55.756277    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0127 11:14:55.787169    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0127 11:14:55.828452    5908 ssh_runner.go:195] Run: openssl version
	I0127 11:14:55.848061    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5956.pem && ln -fs /usr/share/ca-certificates/5956.pem /etc/ssl/certs/5956.pem"
	I0127 11:14:55.876854    5908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5956.pem
	I0127 11:14:55.884725    5908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:52 /usr/share/ca-certificates/5956.pem
	I0127 11:14:55.897240    5908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5956.pem
	I0127 11:14:55.917841    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5956.pem /etc/ssl/certs/51391683.0"
	I0127 11:14:55.949207    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59562.pem && ln -fs /usr/share/ca-certificates/59562.pem /etc/ssl/certs/59562.pem"
	I0127 11:14:55.980437    5908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59562.pem
	I0127 11:14:55.988207    5908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:52 /usr/share/ca-certificates/59562.pem
	I0127 11:14:55.998808    5908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59562.pem
	I0127 11:14:56.020281    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59562.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 11:14:56.053161    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:14:56.086377    5908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:14:56.092530    5908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:14:56.103036    5908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:14:56.123053    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:14:56.155730    5908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:14:56.162092    5908 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 11:14:56.162398    5908 kubeadm.go:934] updating node {m02 172.29.195.173 8443 v1.32.1 docker true true} ...
	I0127 11:14:56.162514    5908 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-011400-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.195.173
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:ha-011400 Namespace:default APIServerHAVIP:172.29.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 11:14:56.162720    5908 kube-vip.go:115] generating kube-vip config ...
	I0127 11:14:56.174864    5908 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0127 11:14:56.204828    5908 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0127 11:14:56.204828    5908 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.29.207.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0127 11:14:56.215922    5908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 11:14:56.233146    5908 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.1': No such file or directory
	
	Initiating transfer...
	I0127 11:14:56.245910    5908 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.1
	I0127 11:14:56.275297    5908 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl
	I0127 11:14:56.275431    5908 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet
	I0127 11:14:56.275431    5908 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm
	I0127 11:14:57.516950    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:14:57.550483    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet -> /var/lib/minikube/binaries/v1.32.1/kubelet
	I0127 11:14:57.560869    5908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet
	I0127 11:14:57.567867    5908 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubelet': No such file or directory
	I0127 11:14:57.567867    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet --> /var/lib/minikube/binaries/v1.32.1/kubelet (77398276 bytes)
	I0127 11:14:57.590891    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm -> /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0127 11:14:57.601877    5908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0127 11:14:57.672716    5908 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubeadm': No such file or directory
	I0127 11:14:57.672969    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm --> /var/lib/minikube/binaries/v1.32.1/kubeadm (70942872 bytes)
	I0127 11:14:57.839401    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl -> /var/lib/minikube/binaries/v1.32.1/kubectl
	I0127 11:14:57.850235    5908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl
	I0127 11:14:57.870924    5908 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubectl': No such file or directory
	I0127 11:14:57.870924    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl --> /var/lib/minikube/binaries/v1.32.1/kubectl (57323672 bytes)
	I0127 11:14:59.174722    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0127 11:14:59.194211    5908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0127 11:14:59.227352    5908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:14:59.256896    5908 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0127 11:14:59.302312    5908 ssh_runner.go:195] Run: grep 172.29.207.254	control-plane.minikube.internal$ /etc/hosts
	I0127 11:14:59.309138    5908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:14:59.340210    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:14:59.551867    5908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:14:59.585050    5908 host.go:66] Checking if "ha-011400" exists ...
	I0127 11:14:59.585947    5908 start.go:317] joinCluster: &{Name:ha-011400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-011400 Namespace:default APIServerHAVIP:172.29.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.192.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.195.173 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:14:59.586142    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0127 11:14:59.586268    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:15:01.757153    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:15:01.757153    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:15:01.757346    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:15:04.319045    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:15:04.319226    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:15:04.319345    5908 sshutil.go:53] new ssh client: &{IP:172.29.192.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\id_rsa Username:docker}
	I0127 11:15:04.786583    5908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0": (5.2002402s)
	I0127 11:15:04.786687    5908 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.29.195.173 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 11:15:04.786777    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7etute.u7301vj52t2o46lo --discovery-token-ca-cert-hash sha256:7b53ba02e26824ededdf08178373023b65bda8005ddad46edfe91cb6a3cb8d3f --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-011400-m02 --control-plane --apiserver-advertise-address=172.29.195.173 --apiserver-bind-port=8443"
	I0127 11:15:45.670358    5908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7etute.u7301vj52t2o46lo --discovery-token-ca-cert-hash sha256:7b53ba02e26824ededdf08178373023b65bda8005ddad46edfe91cb6a3cb8d3f --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-011400-m02 --control-plane --apiserver-advertise-address=172.29.195.173 --apiserver-bind-port=8443": (40.8831562s)
	I0127 11:15:45.670485    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0127 11:15:46.448867    5908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-011400-m02 minikube.k8s.io/updated_at=2025_01_27T11_15_46_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=ha-011400 minikube.k8s.io/primary=false
	I0127 11:15:46.695142    5908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-011400-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0127 11:15:46.834632    5908 start.go:319] duration metric: took 47.2481943s to joinCluster
	I0127 11:15:46.834632    5908 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.29.195.173 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 11:15:46.835457    5908 config.go:182] Loaded profile config "ha-011400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 11:15:46.839318    5908 out.go:177] * Verifying Kubernetes components...
	I0127 11:15:46.853307    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:15:47.228955    5908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:15:47.268969    5908 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 11:15:47.270252    5908 kapi.go:59] client config for ha-011400: &rest.Config{Host:"https://172.29.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-011400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-011400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x301e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0127 11:15:47.270431    5908 kubeadm.go:483] Overriding stale ClientConfig host https://172.29.207.254:8443 with https://172.29.192.249:8443
	I0127 11:15:47.271131    5908 node_ready.go:35] waiting up to 6m0s for node "ha-011400-m02" to be "Ready" ...
	I0127 11:15:47.271131    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:47.271670    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:47.271670    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:47.271730    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:47.295386    5908 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0127 11:15:47.771798    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:47.771798    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:47.771798    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:47.771798    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:47.779743    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:15:48.271933    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:48.271933    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:48.271933    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:48.271933    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:48.280934    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:15:48.772238    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:48.772238    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:48.772238    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:48.772238    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:48.778224    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:15:49.272041    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:49.272041    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:49.272041    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:49.272041    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:49.278846    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:15:49.279696    5908 node_ready.go:53] node "ha-011400-m02" has status "Ready":"False"
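	[Editor's note: the repeated GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02 requests above are minikube polling the node object until its Ready condition turns True (node_ready.go "waiting up to 6m0s"). Below is a minimal standalone sketch of that polling pattern, not minikube's code; the APISERVER and TOKEN environment variables, the nodeReady helper, and the insecure TLS setting are assumptions made only to keep the example short and runnable.]

	package main

	import (
		"crypto/tls"
		"encoding/json"
		"fmt"
		"net/http"
		"os"
		"time"
	)

	// nodeStatus captures only the fields needed to read the Ready condition.
	type nodeStatus struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}

	// nodeReady performs one GET /api/v1/nodes/<name> and reports whether the
	// Ready condition is True.
	func nodeReady(client *http.Client, apiServer, token, node string) (bool, error) {
		req, err := http.NewRequest("GET", apiServer+"/api/v1/nodes/"+node, nil)
		if err != nil {
			return false, err
		}
		req.Header.Set("Accept", "application/json")
		req.Header.Set("Authorization", "Bearer "+token)
		resp, err := client.Do(req)
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		var ns nodeStatus
		if err := json.NewDecoder(resp.Body).Decode(&ns); err != nil {
			return false, err
		}
		for _, c := range ns.Status.Conditions {
			if c.Type == "Ready" {
				return c.Status == "True", nil
			}
		}
		return false, nil
	}

	func main() {
		// InsecureSkipVerify only keeps the sketch short; real code should trust the cluster CA.
		client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
		apiServer, token := os.Getenv("APISERVER"), os.Getenv("TOKEN")
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			if ready, err := nodeReady(client, apiServer, token, "ha-011400-m02"); err == nil && ready {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for node to become Ready")
	}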
	I0127 11:15:49.772041    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:49.772041    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:49.772041    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:49.772041    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:49.778009    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:15:50.271984    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:50.271984    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:50.271984    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:50.271984    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:50.276654    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:15:50.771689    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:50.771830    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:50.771830    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:50.771830    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:50.777784    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:15:51.271599    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:51.271599    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:51.271599    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:51.271599    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:51.613797    5908 round_trippers.go:574] Response Status: 200 OK in 342 milliseconds
	I0127 11:15:51.614912    5908 node_ready.go:53] node "ha-011400-m02" has status "Ready":"False"
	I0127 11:15:51.771528    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:51.771586    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:51.771586    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:51.771586    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:51.916471    5908 round_trippers.go:574] Response Status: 200 OK in 144 milliseconds
	I0127 11:15:52.271299    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:52.271299    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:52.271299    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:52.271299    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:52.284030    5908 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0127 11:15:52.771644    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:52.771644    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:52.771644    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:52.771644    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:52.776799    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:15:53.272529    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:53.272529    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:53.272529    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:53.272529    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:53.279392    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:15:53.772241    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:53.772371    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:53.772371    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:53.772371    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:53.778625    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:15:53.779717    5908 node_ready.go:53] node "ha-011400-m02" has status "Ready":"False"
	I0127 11:15:54.271751    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:54.271751    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:54.271751    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:54.271751    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:54.277676    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:15:54.771626    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:54.771626    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:54.771626    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:54.771626    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:54.783650    5908 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0127 11:15:55.272859    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:55.272859    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:55.272859    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:55.272859    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:55.277241    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:15:55.772313    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:55.772313    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:55.772313    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:55.772313    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:55.781274    5908 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0127 11:15:55.782556    5908 node_ready.go:53] node "ha-011400-m02" has status "Ready":"False"
	I0127 11:15:56.271996    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:56.272413    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:56.272413    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:56.272413    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:56.277325    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:15:56.772396    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:56.772396    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:56.772396    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:56.772396    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:56.777347    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:15:57.271406    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:57.271406    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:57.271406    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:57.271406    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:57.276114    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:15:57.771796    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:57.771796    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:57.771796    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:57.771796    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:57.777155    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:15:58.271727    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:58.271727    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:58.271727    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:58.271727    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:58.285643    5908 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0127 11:15:58.286074    5908 node_ready.go:53] node "ha-011400-m02" has status "Ready":"False"
	I0127 11:15:58.772266    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:58.772266    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:58.772266    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:58.772266    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:58.778316    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:15:59.272186    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:59.272186    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:59.272186    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:59.272186    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:59.276635    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:15:59.771427    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:15:59.771427    5908 round_trippers.go:469] Request Headers:
	I0127 11:15:59.771427    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:15:59.771427    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:15:59.777032    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:00.272536    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:00.272613    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:00.272613    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:00.272613    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:00.279414    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:16:00.772119    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:00.772185    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:00.772185    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:00.772185    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:00.778138    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:00.778971    5908 node_ready.go:53] node "ha-011400-m02" has status "Ready":"False"
	I0127 11:16:01.272002    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:01.272002    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:01.272002    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:01.272002    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:01.277104    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:01.772296    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:01.772296    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:01.772296    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:01.772296    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:01.777636    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:02.272592    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:02.272592    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:02.272592    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:02.272592    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:02.277341    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:16:02.771798    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:02.771798    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:02.771798    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:02.771798    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:02.776701    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:16:03.272612    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:03.272612    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:03.272612    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:03.272757    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:03.277893    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:03.278603    5908 node_ready.go:53] node "ha-011400-m02" has status "Ready":"False"
	I0127 11:16:03.771520    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:03.771520    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:03.771520    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:03.771520    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:03.777757    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:16:04.272436    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:04.272436    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:04.272436    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:04.272436    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:04.278936    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:16:04.771987    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:04.772042    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:04.772042    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:04.772042    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:04.780046    5908 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0127 11:16:05.272322    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:05.272393    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:05.272393    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:05.272393    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:05.277324    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:16:05.771998    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:05.771998    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:05.771998    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:05.771998    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:05.778294    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:16:05.779962    5908 node_ready.go:53] node "ha-011400-m02" has status "Ready":"False"
	I0127 11:16:06.272466    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:06.272466    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:06.272466    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:06.272466    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:06.278170    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:06.771864    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:06.771864    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:06.771864    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:06.771864    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:06.777203    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:07.272649    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:07.272649    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:07.272649    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:07.272649    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:07.286576    5908 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0127 11:16:07.771820    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:07.771820    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:07.771820    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:07.771820    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:07.780255    5908 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0127 11:16:07.780379    5908 node_ready.go:53] node "ha-011400-m02" has status "Ready":"False"
	I0127 11:16:08.272368    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:08.272368    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:08.272368    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:08.272368    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:08.276298    5908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 11:16:08.278161    5908 node_ready.go:49] node "ha-011400-m02" has status "Ready":"True"
	I0127 11:16:08.278161    5908 node_ready.go:38] duration metric: took 21.0068112s for node "ha-011400-m02" to be "Ready" ...
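	[Note] The loop above simply re-reads the node object roughly every 500 ms and inspects its Ready condition until it reports True, which here took about 21 s. Below is a minimal client-go sketch of the same check; the kubeconfig path is a placeholder, and the node name and timeout are copied from this run purely for illustration, this is not minikube's own code.

```go
// node_ready_sketch.go: poll a node's Ready condition, roughly what the
// node_ready.go wait above is doing. Hypothetical standalone example.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-011400-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500 ms cadence visible in the log
	}
	fmt.Println("timed out waiting for node to become Ready")
}
```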
	I0127 11:16:08.278161    5908 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:16:08.278324    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods
	I0127 11:16:08.278324    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:08.278388    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:08.278388    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:08.284651    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:16:08.293309    5908 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-228t7" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:08.293309    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-228t7
	I0127 11:16:08.293309    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:08.293309    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:08.293309    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:08.297673    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:16:08.298928    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:16:08.298928    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:08.298928    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:08.298928    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:08.303240    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:16:08.304146    5908 pod_ready.go:93] pod "coredns-668d6bf9bc-228t7" in "kube-system" namespace has status "Ready":"True"
	I0127 11:16:08.304146    5908 pod_ready.go:82] duration metric: took 10.8363ms for pod "coredns-668d6bf9bc-228t7" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:08.304146    5908 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-8b9xh" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:08.304369    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-8b9xh
	I0127 11:16:08.304369    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:08.304369    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:08.304369    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:08.307871    5908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 11:16:08.309175    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:16:08.309224    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:08.309274    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:08.309274    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:08.313091    5908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 11:16:08.314449    5908 pod_ready.go:93] pod "coredns-668d6bf9bc-8b9xh" in "kube-system" namespace has status "Ready":"True"
	I0127 11:16:08.314632    5908 pod_ready.go:82] duration metric: took 10.3754ms for pod "coredns-668d6bf9bc-8b9xh" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:08.314632    5908 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:08.314744    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-011400
	I0127 11:16:08.314845    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:08.314845    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:08.314845    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:08.318642    5908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 11:16:08.319427    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:16:08.319427    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:08.319427    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:08.319427    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:08.324767    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:08.324930    5908 pod_ready.go:93] pod "etcd-ha-011400" in "kube-system" namespace has status "Ready":"True"
	I0127 11:16:08.324930    5908 pod_ready.go:82] duration metric: took 10.2974ms for pod "etcd-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:08.324930    5908 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:08.324930    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-011400-m02
	I0127 11:16:08.325460    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:08.325460    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:08.325460    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:08.334598    5908 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0127 11:16:08.334598    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:08.335231    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:08.335231    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:08.335231    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:08.340937    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:08.341907    5908 pod_ready.go:93] pod "etcd-ha-011400-m02" in "kube-system" namespace has status "Ready":"True"
	I0127 11:16:08.341907    5908 pod_ready.go:82] duration metric: took 16.9769ms for pod "etcd-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:08.342004    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:08.472056    5908 request.go:632] Waited for 129.9962ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-011400
	I0127 11:16:08.472056    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-011400
	I0127 11:16:08.472513    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:08.472544    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:08.472544    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:08.476974    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:16:08.672915    5908 request.go:632] Waited for 194.9814ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:16:08.673256    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:16:08.673256    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:08.673256    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:08.673256    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:08.678938    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:08.679763    5908 pod_ready.go:93] pod "kube-apiserver-ha-011400" in "kube-system" namespace has status "Ready":"True"
	I0127 11:16:08.679920    5908 pod_ready.go:82] duration metric: took 337.8775ms for pod "kube-apiserver-ha-011400" in "kube-system" namespace to be "Ready" ...
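	[Note] The "request.go:632] Waited for ... due to client-side throttling" lines above come from client-go's client-side rate limiter (a QPS/burst token bucket), not from server-side API Priority and Fairness, as the message itself points out. A hedged sketch of where those limits live on a rest.Config; the values are illustrative and are not what minikube configures:

```go
// throttle_sketch.go: raise client-go's client-side rate limits.
// Illustrative values only; not minikube's configuration.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newFasterClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // client-go's default client-side limit is 5 requests/second
	cfg.Burst = 100 // default burst is 10
	return kubernetes.NewForConfig(cfg)
}
```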
	I0127 11:16:08.679920    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:08.872398    5908 request.go:632] Waited for 192.4042ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-011400-m02
	I0127 11:16:08.872398    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-011400-m02
	I0127 11:16:08.872398    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:08.872398    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:08.872398    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:08.878014    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:09.072446    5908 request.go:632] Waited for 193.3336ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:09.072446    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:09.072446    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:09.072446    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:09.072446    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:09.079783    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:16:09.080845    5908 pod_ready.go:93] pod "kube-apiserver-ha-011400-m02" in "kube-system" namespace has status "Ready":"True"
	I0127 11:16:09.080901    5908 pod_ready.go:82] duration metric: took 400.9213ms for pod "kube-apiserver-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:09.080901    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:09.272762    5908 request.go:632] Waited for 191.7266ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-011400
	I0127 11:16:09.273279    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-011400
	I0127 11:16:09.273279    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:09.273279    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:09.273279    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:09.278678    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:09.472021    5908 request.go:632] Waited for 192.6179ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:16:09.472301    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:16:09.472301    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:09.472301    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:09.472301    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:09.479797    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:16:09.480828    5908 pod_ready.go:93] pod "kube-controller-manager-ha-011400" in "kube-system" namespace has status "Ready":"True"
	I0127 11:16:09.480911    5908 pod_ready.go:82] duration metric: took 400.0068ms for pod "kube-controller-manager-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:09.480911    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:09.672283    5908 request.go:632] Waited for 191.3691ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-011400-m02
	I0127 11:16:09.672728    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-011400-m02
	I0127 11:16:09.672728    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:09.672728    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:09.672728    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:09.678599    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:09.872281    5908 request.go:632] Waited for 192.7668ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:09.872281    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:09.872781    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:09.872972    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:09.873062    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:09.878833    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:16:09.878961    5908 pod_ready.go:93] pod "kube-controller-manager-ha-011400-m02" in "kube-system" namespace has status "Ready":"True"
	I0127 11:16:09.878961    5908 pod_ready.go:82] duration metric: took 398.0457ms for pod "kube-controller-manager-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:09.878961    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hg72m" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:10.072092    5908 request.go:632] Waited for 193.1289ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg72m
	I0127 11:16:10.072092    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg72m
	I0127 11:16:10.072092    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:10.072092    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:10.072092    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:10.079764    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:16:10.272650    5908 request.go:632] Waited for 191.8681ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:16:10.272650    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:16:10.273177    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:10.273216    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:10.273216    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:10.278267    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:16:10.279104    5908 pod_ready.go:93] pod "kube-proxy-hg72m" in "kube-system" namespace has status "Ready":"True"
	I0127 11:16:10.279104    5908 pod_ready.go:82] duration metric: took 400.1388ms for pod "kube-proxy-hg72m" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:10.279223    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x52km" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:10.472068    5908 request.go:632] Waited for 192.8433ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x52km
	I0127 11:16:10.472068    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x52km
	I0127 11:16:10.472068    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:10.472068    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:10.472068    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:10.477030    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:16:10.672769    5908 request.go:632] Waited for 194.7103ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:10.673207    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:10.673241    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:10.673283    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:10.673283    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:10.681044    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:16:10.681652    5908 pod_ready.go:93] pod "kube-proxy-x52km" in "kube-system" namespace has status "Ready":"True"
	I0127 11:16:10.681652    5908 pod_ready.go:82] duration metric: took 402.4255ms for pod "kube-proxy-x52km" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:10.681652    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:10.873194    5908 request.go:632] Waited for 191.5393ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-011400
	I0127 11:16:10.873194    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-011400
	I0127 11:16:10.873194    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:10.873194    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:10.873194    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:10.878158    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:16:11.073007    5908 request.go:632] Waited for 193.0754ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:16:11.073311    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:16:11.073311    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:11.073385    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:11.073385    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:11.079942    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:16:11.080807    5908 pod_ready.go:93] pod "kube-scheduler-ha-011400" in "kube-system" namespace has status "Ready":"True"
	I0127 11:16:11.080807    5908 pod_ready.go:82] duration metric: took 399.1503ms for pod "kube-scheduler-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:11.080807    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:11.272419    5908 request.go:632] Waited for 191.6099ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-011400-m02
	I0127 11:16:11.272419    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-011400-m02
	I0127 11:16:11.272419    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:11.272419    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:11.272419    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:11.278551    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:16:11.472203    5908 request.go:632] Waited for 192.9058ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:11.472203    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:16:11.472203    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:11.472203    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:11.472203    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:11.477496    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:16:11.478524    5908 pod_ready.go:93] pod "kube-scheduler-ha-011400-m02" in "kube-system" namespace has status "Ready":"True"
	I0127 11:16:11.478524    5908 pod_ready.go:82] duration metric: took 397.7132ms for pod "kube-scheduler-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:16:11.478524    5908 pod_ready.go:39] duration metric: took 3.2002488s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
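	[Note] Each pod_ready wait above follows the same per-pod pattern: fetch the pod, fetch the node it is scheduled on, and accept the pod once its Ready condition is True. A minimal sketch of that condition check; the package and function names are mine, not minikube's:

```go
package readiness

import corev1 "k8s.io/api/core/v1"

// podIsReady reports whether a pod's Ready condition is True, the same
// condition the pod_ready waits above are checking for each system pod.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
```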
	I0127 11:16:11.478609    5908 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:16:11.489806    5908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:16:11.515702    5908 api_server.go:72] duration metric: took 24.6808131s to wait for apiserver process to appear ...
	I0127 11:16:11.515749    5908 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:16:11.515749    5908 api_server.go:253] Checking apiserver healthz at https://172.29.192.249:8443/healthz ...
	I0127 11:16:11.532748    5908 api_server.go:279] https://172.29.192.249:8443/healthz returned 200:
	ok
	I0127 11:16:11.532876    5908 round_trippers.go:463] GET https://172.29.192.249:8443/version
	I0127 11:16:11.532950    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:11.532950    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:11.532950    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:11.536422    5908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 11:16:11.536422    5908 api_server.go:141] control plane version: v1.32.1
	I0127 11:16:11.536422    5908 api_server.go:131] duration metric: took 20.6721ms to wait for apiserver health ...
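	[Note] The health probe above is an authenticated GET against the apiserver's /healthz path, which answers with the literal body "ok", followed by a /version call that yields the control plane version logged here. A rough client-go sketch of both calls; the kubeconfig path is a placeholder and error handling is trimmed:

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// GET /healthz: a healthy apiserver responds with the body "ok".
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	fmt.Printf("healthz: %s err: %v\n", body, err)

	// GET /version: the same information the log reports as "control plane version: v1.32.1".
	info, err := client.Discovery().ServerVersion()
	if err == nil {
		fmt.Println("control plane version:", info.GitVersion)
	}
}
```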
	I0127 11:16:11.536422    5908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:16:11.673267    5908 request.go:632] Waited for 136.8444ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods
	I0127 11:16:11.673267    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods
	I0127 11:16:11.673267    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:11.673267    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:11.673267    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:11.680282    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:16:11.688159    5908 system_pods.go:59] 17 kube-system pods found
	I0127 11:16:11.688159    5908 system_pods.go:61] "coredns-668d6bf9bc-228t7" [ac40dfec-9e9f-4414-9259-a7dadfb2c93d] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "coredns-668d6bf9bc-8b9xh" [647a1e55-d5ce-4f2b-933f-8caf13d7463b] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "etcd-ha-011400" [90238c1c-70b2-47e8-9bab-49f2334ca4b3] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "etcd-ha-011400-m02" [fcda2776-bc47-47af-948a-94e549a41fec] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "kindnet-fs97j" [d480fa1c-808e-4c5d-818e-26281dca23d4] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "kindnet-ll5br" [6a2a0fea-258a-4593-8445-398f37e379e4] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "kube-apiserver-ha-011400" [7bda282c-7cb1-46f1-9bb8-366bc992aaed] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "kube-apiserver-ha-011400-m02" [8e5dcd2c-fbca-473d-8aa2-70e7fb8866c7] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "kube-controller-manager-ha-011400" [1b8e425d-03da-4d95-86e5-e1e6f15b64bd] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "kube-controller-manager-ha-011400-m02" [c20cfbe1-337f-462f-968f-c19741634ac4] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "kube-proxy-hg72m" [dc860339-d169-452b-9621-170ae73c7a5e] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "kube-proxy-x52km" [0a6cc7f2-2b15-4db1-b5fb-d6448d4bd295] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "kube-scheduler-ha-011400" [35220ede-c59f-4d24-88c5-728088af2abf] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "kube-scheduler-ha-011400-m02" [250614b6-0c08-4e8a-a080-58253b81d4f7] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "kube-vip-ha-011400" [31c47527-c1fe-4064-bcb4-faffcedab1f4] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "kube-vip-ha-011400-m02" [1e3bf93b-caab-4f37-a8bd-36f0ad76eb4c] Running
	I0127 11:16:11.688159    5908 system_pods.go:61] "storage-provisioner" [2755d063-0183-41c1-9fe8-e533017aef39] Running
	I0127 11:16:11.688159    5908 system_pods.go:74] duration metric: took 151.7354ms to wait for pod list to return data ...
	I0127 11:16:11.688159    5908 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:16:11.873149    5908 request.go:632] Waited for 184.9887ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/default/serviceaccounts
	I0127 11:16:11.873476    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/default/serviceaccounts
	I0127 11:16:11.873476    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:11.873476    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:11.873476    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:11.879842    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:16:11.879842    5908 default_sa.go:45] found service account: "default"
	I0127 11:16:11.879842    5908 default_sa.go:55] duration metric: took 191.6813ms for default service account to be created ...
	I0127 11:16:11.879842    5908 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:16:12.072143    5908 request.go:632] Waited for 192.2991ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods
	I0127 11:16:12.072362    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods
	I0127 11:16:12.072362    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:12.072362    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:12.072362    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:12.080025    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:16:12.086589    5908 system_pods.go:87] 17 kube-system pods found
	I0127 11:16:12.086673    5908 system_pods.go:105] "coredns-668d6bf9bc-228t7" [ac40dfec-9e9f-4414-9259-a7dadfb2c93d] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "coredns-668d6bf9bc-8b9xh" [647a1e55-d5ce-4f2b-933f-8caf13d7463b] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "etcd-ha-011400" [90238c1c-70b2-47e8-9bab-49f2334ca4b3] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "etcd-ha-011400-m02" [fcda2776-bc47-47af-948a-94e549a41fec] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "kindnet-fs97j" [d480fa1c-808e-4c5d-818e-26281dca23d4] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "kindnet-ll5br" [6a2a0fea-258a-4593-8445-398f37e379e4] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "kube-apiserver-ha-011400" [7bda282c-7cb1-46f1-9bb8-366bc992aaed] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "kube-apiserver-ha-011400-m02" [8e5dcd2c-fbca-473d-8aa2-70e7fb8866c7] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "kube-controller-manager-ha-011400" [1b8e425d-03da-4d95-86e5-e1e6f15b64bd] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "kube-controller-manager-ha-011400-m02" [c20cfbe1-337f-462f-968f-c19741634ac4] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "kube-proxy-hg72m" [dc860339-d169-452b-9621-170ae73c7a5e] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "kube-proxy-x52km" [0a6cc7f2-2b15-4db1-b5fb-d6448d4bd295] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "kube-scheduler-ha-011400" [35220ede-c59f-4d24-88c5-728088af2abf] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "kube-scheduler-ha-011400-m02" [250614b6-0c08-4e8a-a080-58253b81d4f7] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "kube-vip-ha-011400" [31c47527-c1fe-4064-bcb4-faffcedab1f4] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "kube-vip-ha-011400-m02" [1e3bf93b-caab-4f37-a8bd-36f0ad76eb4c] Running
	I0127 11:16:12.086673    5908 system_pods.go:105] "storage-provisioner" [2755d063-0183-41c1-9fe8-e533017aef39] Running
	I0127 11:16:12.086673    5908 system_pods.go:147] duration metric: took 206.8292ms to wait for k8s-apps to be running ...
	I0127 11:16:12.086673    5908 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 11:16:12.097723    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:16:12.123285    5908 system_svc.go:56] duration metric: took 36.6119ms WaitForService to wait for kubelet
	I0127 11:16:12.123285    5908 kubeadm.go:582] duration metric: took 25.28839s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
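	[Note] The kubelet check above runs "systemctl is-active" inside the guest over SSH (the ssh_runner line). A rough sketch of that kind of remote check with golang.org/x/crypto/ssh; the user name, key path, and port are assumptions, and the IP is the m02 address from the profile config shown later in this log:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/machines/ha-011400-m02/id_rsa") // placeholder key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker", // assumed guest user
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "172.29.195.173:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// systemctl exits non-zero when the unit is not active, so Run returns an error in that case.
	if err := sess.Run("sudo systemctl is-active --quiet kubelet"); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
```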
	I0127 11:16:12.123285    5908 node_conditions.go:102] verifying NodePressure condition ...
	I0127 11:16:12.273572    5908 request.go:632] Waited for 150.2846ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes
	I0127 11:16:12.273572    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes
	I0127 11:16:12.273572    5908 round_trippers.go:469] Request Headers:
	I0127 11:16:12.273572    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:16:12.273572    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:16:12.280318    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:16:12.281994    5908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 11:16:12.282059    5908 node_conditions.go:123] node cpu capacity is 2
	I0127 11:16:12.282059    5908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 11:16:12.282059    5908 node_conditions.go:123] node cpu capacity is 2
	I0127 11:16:12.282059    5908 node_conditions.go:105] duration metric: took 158.772ms to run NodePressure ...
	I0127 11:16:12.282059    5908 start.go:241] waiting for startup goroutines ...
	I0127 11:16:12.282151    5908 start.go:255] writing updated cluster config ...
	I0127 11:16:12.290415    5908 out.go:201] 
	I0127 11:16:12.313429    5908 config.go:182] Loaded profile config "ha-011400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 11:16:12.313429    5908 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\config.json ...
	I0127 11:16:12.327008    5908 out.go:177] * Starting "ha-011400-m03" control-plane node in "ha-011400" cluster
	I0127 11:16:12.330319    5908 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0127 11:16:12.330319    5908 cache.go:56] Caching tarball of preloaded images
	I0127 11:16:12.331471    5908 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 11:16:12.331471    5908 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0127 11:16:12.332109    5908 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\config.json ...
	I0127 11:16:12.334747    5908 start.go:360] acquireMachinesLock for ha-011400-m03: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 11:16:12.334945    5908 start.go:364] duration metric: took 115.7µs to acquireMachinesLock for "ha-011400-m03"
	I0127 11:16:12.335231    5908 start.go:93] Provisioning new machine with config: &{Name:ha-011400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-011400 Namespace:def
ault APIServerHAVIP:172.29.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.192.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.195.173 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false i
stio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimization
s:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 11:16:12.335447    5908 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0127 11:16:12.339826    5908 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0127 11:16:12.339826    5908 start.go:159] libmachine.API.Create for "ha-011400" (driver="hyperv")
	I0127 11:16:12.339826    5908 client.go:168] LocalClient.Create starting
	I0127 11:16:12.340648    5908 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0127 11:16:12.341303    5908 main.go:141] libmachine: Decoding PEM data...
	I0127 11:16:12.341303    5908 main.go:141] libmachine: Parsing certificate...
	I0127 11:16:12.341303    5908 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0127 11:16:12.342016    5908 main.go:141] libmachine: Decoding PEM data...
	I0127 11:16:12.342016    5908 main.go:141] libmachine: Parsing certificate...
	I0127 11:16:12.342016    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0127 11:16:14.200229    5908 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0127 11:16:14.201076    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:14.201076    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0127 11:16:15.919387    5908 main.go:141] libmachine: [stdout =====>] : False
	
	I0127 11:16:15.919387    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:15.919886    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0127 11:16:17.390618    5908 main.go:141] libmachine: [stdout =====>] : True
	
	I0127 11:16:17.391728    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:17.391728    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0127 11:16:21.059517    5908 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0127 11:16:21.060377    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:21.062408    5908 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 11:16:21.594430    5908 main.go:141] libmachine: Creating SSH key...
	I0127 11:16:21.805933    5908 main.go:141] libmachine: Creating VM...
	I0127 11:16:21.806868    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0127 11:16:24.691501    5908 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0127 11:16:24.691869    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:24.691924    5908 main.go:141] libmachine: Using switch "Default Switch"
	I0127 11:16:24.691924    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0127 11:16:26.525666    5908 main.go:141] libmachine: [stdout =====>] : True
	
	I0127 11:16:26.526422    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:26.526422    5908 main.go:141] libmachine: Creating VHD
	I0127 11:16:26.526516    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0127 11:16:30.309479    5908 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 4E7FCE85-94FA-4073-A6ED-9004DCC96862
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0127 11:16:30.309479    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:30.309479    5908 main.go:141] libmachine: Writing magic tar header
	I0127 11:16:30.309479    5908 main.go:141] libmachine: Writing SSH key tar header
	I0127 11:16:30.321404    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0127 11:16:33.502890    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:16:33.503628    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:33.503872    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m03\disk.vhd' -SizeBytes 20000MB
	I0127 11:16:36.003207    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:16:36.003207    5908 main.go:141] libmachine: [stderr =====>] : 
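	[Note] Disk preparation for the new node follows the steps visible above: create a small fixed-size VHD, write a tar stream containing the machine's SSH key directly into it (the "Writing magic tar header" / "Writing SSH key tar header" lines), then convert it to a dynamic VHD and resize it to the requested 20000MB. A rough standard-library sketch of the tar-into-raw-disk step only; file names and modes are illustrative, not minikube's exact layout:

```go
package main

import (
	"archive/tar"
	"os"
)

// Write a tar stream containing an SSH public key at the start of a raw
// disk image, mirroring the tar-header steps logged above. Illustrative only.
func main() {
	disk, err := os.OpenFile("fixed.vhd", os.O_WRONLY, 0o644) // placeholder disk path
	if err != nil {
		panic(err)
	}
	defer disk.Close()

	pubKey, err := os.ReadFile("id_rsa.pub") // placeholder key file
	if err != nil {
		panic(err)
	}

	tw := tar.NewWriter(disk)
	if err := tw.WriteHeader(&tar.Header{Name: ".ssh/", Mode: 0o700, Typeflag: tar.TypeDir}); err != nil {
		panic(err)
	}
	if err := tw.WriteHeader(&tar.Header{Name: ".ssh/authorized_keys", Mode: 0o600, Size: int64(len(pubKey))}); err != nil {
		panic(err)
	}
	if _, err := tw.Write(pubKey); err != nil {
		panic(err)
	}
	if err := tw.Close(); err != nil {
		panic(err)
	}
}
```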
	I0127 11:16:36.003207    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-011400-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0127 11:16:39.593101    5908 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-011400-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0127 11:16:39.593101    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:39.593101    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-011400-m03 -DynamicMemoryEnabled $false
	I0127 11:16:41.813108    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:16:41.813108    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:41.813819    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-011400-m03 -Count 2
	I0127 11:16:43.954430    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:16:43.954430    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:43.954607    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-011400-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m03\boot2docker.iso'
	I0127 11:16:46.524866    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:16:46.525605    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:46.525731    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-011400-m03 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m03\disk.vhd'
	I0127 11:16:49.182904    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:16:49.182904    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:49.182904    5908 main.go:141] libmachine: Starting VM...
	I0127 11:16:49.182904    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-011400-m03
	I0127 11:16:52.203004    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:16:52.203004    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:52.203004    5908 main.go:141] libmachine: Waiting for host to start...
	I0127 11:16:52.203004    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:16:54.486804    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:16:54.487829    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:54.487921    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:16:56.964271    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:16:56.964271    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:16:57.965198    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:17:00.212655    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:17:00.212655    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:00.212655    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:17:02.719878    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:17:02.719947    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:03.721167    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:17:05.884562    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:17:05.884562    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:05.885561    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:17:08.370763    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:17:08.370763    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:09.371503    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:17:11.584706    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:17:11.584706    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:11.584706    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:17:14.063448    5908 main.go:141] libmachine: [stdout =====>] : 
	I0127 11:17:14.063448    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:15.064040    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:17:17.251342    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:17:17.251342    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:17.251342    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:17:19.827445    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:17:19.827445    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:19.827445    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:17:21.901824    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:17:21.901824    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:21.901824    5908 machine.go:93] provisionDockerMachine start ...
	I0127 11:17:21.902488    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:17:24.036887    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:17:24.037910    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:24.037984    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:17:26.532863    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:17:26.532863    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:26.538403    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:17:26.539152    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.196.110 22 <nil> <nil>}
	I0127 11:17:26.539152    5908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 11:17:26.667484    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 11:17:26.667484    5908 buildroot.go:166] provisioning hostname "ha-011400-m03"
	I0127 11:17:26.667484    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:17:28.740837    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:17:28.741867    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:28.741867    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:17:31.283621    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:17:31.283621    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:31.289147    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:17:31.289228    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.196.110 22 <nil> <nil>}
	I0127 11:17:31.289228    5908 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-011400-m03 && echo "ha-011400-m03" | sudo tee /etc/hostname
	I0127 11:17:31.433918    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-011400-m03
	
	I0127 11:17:31.433918    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:17:33.567824    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:17:33.567824    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:33.567974    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:17:36.024568    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:17:36.025430    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:36.030387    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:17:36.031007    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.196.110 22 <nil> <nil>}
	I0127 11:17:36.031007    5908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-011400-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-011400-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-011400-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 11:17:36.166976    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 11:17:36.166976    5908 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0127 11:17:36.167055    5908 buildroot.go:174] setting up certificates
	I0127 11:17:36.167145    5908 provision.go:84] configureAuth start
	I0127 11:17:36.167200    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:17:38.238464    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:17:38.238464    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:38.238464    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:17:40.734427    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:17:40.734427    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:40.734427    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:17:42.861668    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:17:42.861668    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:42.861668    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:17:45.409676    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:17:45.409676    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:45.409779    5908 provision.go:143] copyHostCerts
	I0127 11:17:45.410025    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0127 11:17:45.410282    5908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0127 11:17:45.410352    5908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0127 11:17:45.410749    5908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0127 11:17:45.411366    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0127 11:17:45.412125    5908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0127 11:17:45.412188    5908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0127 11:17:45.412188    5908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0127 11:17:45.413596    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0127 11:17:45.413935    5908 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0127 11:17:45.414042    5908 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0127 11:17:45.414376    5908 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0127 11:17:45.415302    5908 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-011400-m03 san=[127.0.0.1 172.29.196.110 ha-011400-m03 localhost minikube]
	I0127 11:17:45.516982    5908 provision.go:177] copyRemoteCerts
	I0127 11:17:45.529791    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 11:17:45.529869    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:17:47.695524    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:17:47.696157    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:47.696426    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:17:50.223462    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:17:50.223657    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:50.224146    5908 sshutil.go:53] new ssh client: &{IP:172.29.196.110 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m03\id_rsa Username:docker}
	I0127 11:17:50.328064    5908 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7981447s)
	I0127 11:17:50.328064    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0127 11:17:50.328749    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 11:17:50.382178    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0127 11:17:50.382756    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0127 11:17:50.436584    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0127 11:17:50.437058    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 11:17:50.487769    5908 provision.go:87] duration metric: took 14.3204751s to configureAuth
	I0127 11:17:50.487769    5908 buildroot.go:189] setting minikube options for container-runtime
	I0127 11:17:50.488660    5908 config.go:182] Loaded profile config "ha-011400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 11:17:50.488887    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:17:52.582668    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:17:52.582668    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:52.582668    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:17:55.083340    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:17:55.083848    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:55.091466    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:17:55.092349    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.196.110 22 <nil> <nil>}
	I0127 11:17:55.092349    5908 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0127 11:17:55.216448    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0127 11:17:55.216448    5908 buildroot.go:70] root file system type: tmpfs
	I0127 11:17:55.217230    5908 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0127 11:17:55.217230    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:17:57.300448    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:17:57.300448    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:57.300448    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:17:59.821708    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:17:59.821708    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:17:59.827994    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:17:59.828700    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.196.110 22 <nil> <nil>}
	I0127 11:17:59.828700    5908 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.29.192.249"
	Environment="NO_PROXY=172.29.192.249,172.29.195.173"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0127 11:17:59.975249    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.29.192.249
	Environment=NO_PROXY=172.29.192.249,172.29.195.173
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0127 11:17:59.975345    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:18:02.087930    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:18:02.087930    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:02.087930    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:18:04.614202    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:18:04.614202    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:04.620291    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:18:04.620291    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.196.110 22 <nil> <nil>}
	I0127 11:18:04.620291    5908 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0127 11:18:06.816543    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0127 11:18:06.816543    5908 machine.go:96] duration metric: took 44.9142518s to provisionDockerMachine
	I0127 11:18:06.816543    5908 client.go:171] duration metric: took 1m54.4750018s to LocalClient.Create
	I0127 11:18:06.816543    5908 start.go:167] duration metric: took 1m54.4755264s to libmachine.API.Create "ha-011400"
	I0127 11:18:06.816543    5908 start.go:293] postStartSetup for "ha-011400-m03" (driver="hyperv")
	I0127 11:18:06.816543    5908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 11:18:06.832104    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 11:18:06.832104    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:18:08.930234    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:18:08.930234    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:08.931246    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:18:11.453563    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:18:11.453563    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:11.454799    5908 sshutil.go:53] new ssh client: &{IP:172.29.196.110 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m03\id_rsa Username:docker}
	I0127 11:18:11.554257    5908 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7221039s)
	I0127 11:18:11.567178    5908 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 11:18:11.576853    5908 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 11:18:11.576853    5908 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0127 11:18:11.576853    5908 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0127 11:18:11.578732    5908 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> 59562.pem in /etc/ssl/certs
	I0127 11:18:11.578732    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> /etc/ssl/certs/59562.pem
	I0127 11:18:11.591843    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 11:18:11.611510    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem --> /etc/ssl/certs/59562.pem (1708 bytes)
	I0127 11:18:11.657489    5908 start.go:296] duration metric: took 4.8408949s for postStartSetup
	I0127 11:18:11.660484    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:18:13.781966    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:18:13.782395    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:13.782395    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:18:16.343174    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:18:16.343174    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:16.343174    5908 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\config.json ...
	I0127 11:18:16.346162    5908 start.go:128] duration metric: took 2m4.0094253s to createHost
	I0127 11:18:16.346245    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:18:18.470069    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:18:18.470883    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:18.470883    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:18:21.024362    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:18:21.024362    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:21.029192    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:18:21.029522    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.196.110 22 <nil> <nil>}
	I0127 11:18:21.029522    5908 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 11:18:21.156459    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737976701.168097346
	
	I0127 11:18:21.156519    5908 fix.go:216] guest clock: 1737976701.168097346
	I0127 11:18:21.156519    5908 fix.go:229] Guest: 2025-01-27 11:18:21.168097346 +0000 UTC Remote: 2025-01-27 11:18:16.3462458 +0000 UTC m=+549.204315501 (delta=4.821851546s)
	I0127 11:18:21.156637    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:18:23.237155    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:18:23.237155    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:23.237389    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:18:25.749658    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:18:25.749658    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:25.757370    5908 main.go:141] libmachine: Using SSH client type: native
	I0127 11:18:25.758118    5908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.196.110 22 <nil> <nil>}
	I0127 11:18:25.758118    5908 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1737976701
	I0127 11:18:25.895516    5908 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 27 11:18:21 UTC 2025
	
	I0127 11:18:25.895516    5908 fix.go:236] clock set: Mon Jan 27 11:18:21 UTC 2025
	 (err=<nil>)
	I0127 11:18:25.895626    5908 start.go:83] releasing machines lock for "ha-011400-m03", held for 2m13.5591827s
	I0127 11:18:25.895833    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:18:28.027116    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:18:28.027699    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:28.027699    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:18:30.572231    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:18:30.572731    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:30.575389    5908 out.go:177] * Found network options:
	I0127 11:18:30.578132    5908 out.go:177]   - NO_PROXY=172.29.192.249,172.29.195.173
	W0127 11:18:30.581096    5908 proxy.go:119] fail to check proxy env: Error ip not in block
	W0127 11:18:30.581161    5908 proxy.go:119] fail to check proxy env: Error ip not in block
	I0127 11:18:30.583464    5908 out.go:177]   - NO_PROXY=172.29.192.249,172.29.195.173
	W0127 11:18:30.585672    5908 proxy.go:119] fail to check proxy env: Error ip not in block
	W0127 11:18:30.585672    5908 proxy.go:119] fail to check proxy env: Error ip not in block
	W0127 11:18:30.587072    5908 proxy.go:119] fail to check proxy env: Error ip not in block
	W0127 11:18:30.587072    5908 proxy.go:119] fail to check proxy env: Error ip not in block
	I0127 11:18:30.588998    5908 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0127 11:18:30.588998    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:18:30.598210    5908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 11:18:30.599208    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:18:32.834834    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:18:32.834834    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:32.835176    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:18:32.837233    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:18:32.837286    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:32.837286    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:18:35.533454    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:18:35.533454    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:35.533454    5908 sshutil.go:53] new ssh client: &{IP:172.29.196.110 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m03\id_rsa Username:docker}
	I0127 11:18:35.558170    5908 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:18:35.558170    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:35.558170    5908 sshutil.go:53] new ssh client: &{IP:172.29.196.110 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m03\id_rsa Username:docker}
	I0127 11:18:35.625022    5908 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0257627s)
	W0127 11:18:35.625022    5908 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 11:18:35.635998    5908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 11:18:35.640983    5908 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0519325s)
	W0127 11:18:35.640983    5908 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0127 11:18:35.673937    5908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 11:18:35.674685    5908 start.go:495] detecting cgroup driver to use...
	I0127 11:18:35.674821    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:18:35.720910    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	W0127 11:18:35.755071    5908 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0127 11:18:35.755071    5908 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0127 11:18:35.757378    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 11:18:35.778461    5908 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 11:18:35.789284    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 11:18:35.826146    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 11:18:35.855760    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 11:18:35.886604    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 11:18:35.917791    5908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 11:18:35.948291    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 11:18:35.977945    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 11:18:36.005895    5908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 11:18:36.037877    5908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 11:18:36.059735    5908 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 11:18:36.068755    5908 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 11:18:36.101734    5908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 11:18:36.129217    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:18:36.315439    5908 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 11:18:36.347515    5908 start.go:495] detecting cgroup driver to use...
	I0127 11:18:36.359561    5908 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0127 11:18:36.393846    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:18:36.430207    5908 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 11:18:36.474166    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 11:18:36.517092    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 11:18:36.552566    5908 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 11:18:36.616099    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 11:18:36.638165    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 11:18:36.682044    5908 ssh_runner.go:195] Run: which cri-dockerd
	I0127 11:18:36.698338    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0127 11:18:36.713902    5908 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0127 11:18:36.757998    5908 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0127 11:18:36.941426    5908 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0127 11:18:37.143118    5908 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0127 11:18:37.143118    5908 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0127 11:18:37.187492    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:18:37.388588    5908 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 11:18:40.006840    5908 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6180814s)
	I0127 11:18:40.017391    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0127 11:18:40.056297    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 11:18:40.097301    5908 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0127 11:18:40.307447    5908 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0127 11:18:40.497962    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:18:40.687858    5908 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0127 11:18:40.726944    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 11:18:40.760817    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:18:40.956069    5908 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0127 11:18:41.063705    5908 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0127 11:18:41.075042    5908 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0127 11:18:41.085844    5908 start.go:563] Will wait 60s for crictl version
	I0127 11:18:41.096015    5908 ssh_runner.go:195] Run: which crictl
	I0127 11:18:41.112680    5908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 11:18:41.172025    5908 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0127 11:18:41.180513    5908 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 11:18:41.234832    5908 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 11:18:41.272197    5908 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0127 11:18:41.274350    5908 out.go:177]   - env NO_PROXY=172.29.192.249
	I0127 11:18:41.277320    5908 out.go:177]   - env NO_PROXY=172.29.192.249,172.29.195.173
	I0127 11:18:41.279225    5908 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0127 11:18:41.283840    5908 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0127 11:18:41.283840    5908 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0127 11:18:41.283840    5908 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0127 11:18:41.283840    5908 ip.go:211] Found interface: {Index:17 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:43:05:a6 Flags:up|broadcast|multicast|running}
	I0127 11:18:41.285844    5908 ip.go:214] interface addr: fe80::8ceb:a58b:811a:7c79/64
	I0127 11:18:41.286854    5908 ip.go:214] interface addr: 172.29.192.1/20
	I0127 11:18:41.295850    5908 ssh_runner.go:195] Run: grep 172.29.192.1	host.minikube.internal$ /etc/hosts
	I0127 11:18:41.302330    5908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:18:41.322658    5908 mustload.go:65] Loading cluster: ha-011400
	I0127 11:18:41.323051    5908 config.go:182] Loaded profile config "ha-011400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 11:18:41.324301    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:18:43.374374    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:18:43.374395    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:43.374458    5908 host.go:66] Checking if "ha-011400" exists ...
	I0127 11:18:43.375251    5908 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400 for IP: 172.29.196.110
	I0127 11:18:43.375310    5908 certs.go:194] generating shared ca certs ...
	I0127 11:18:43.375310    5908 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:18:43.376124    5908 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0127 11:18:43.376183    5908 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0127 11:18:43.376800    5908 certs.go:256] generating profile certs ...
	I0127 11:18:43.377609    5908 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\client.key
	I0127 11:18:43.377769    5908 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key.95379c4e
	I0127 11:18:43.377932    5908 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt.95379c4e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.29.192.249 172.29.195.173 172.29.196.110 172.29.207.254]
	I0127 11:18:43.439771    5908 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt.95379c4e ...
	I0127 11:18:43.439771    5908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt.95379c4e: {Name:mk259769d2cf026cbf29030ab02d7f34cba67948 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:18:43.441727    5908 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key.95379c4e ...
	I0127 11:18:43.441727    5908 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key.95379c4e: {Name:mk73aadb8faa148e2210f77a4ec90c72b4380bab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 11:18:43.442331    5908 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt.95379c4e -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt
	I0127 11:18:43.459737    5908 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key.95379c4e -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key
	I0127 11:18:43.462086    5908 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.key
	I0127 11:18:43.462086    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0127 11:18:43.462336    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0127 11:18:43.462367    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0127 11:18:43.462367    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0127 11:18:43.462367    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0127 11:18:43.462902    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0127 11:18:43.463130    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0127 11:18:43.463130    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0127 11:18:43.463814    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem (1338 bytes)
	W0127 11:18:43.464239    5908 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956_empty.pem, impossibly tiny 0 bytes
	I0127 11:18:43.464304    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0127 11:18:43.464487    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0127 11:18:43.464487    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0127 11:18:43.465346    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0127 11:18:43.465948    5908 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem (1708 bytes)
	I0127 11:18:43.466145    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:18:43.466145    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem -> /usr/share/ca-certificates/5956.pem
	I0127 11:18:43.466145    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> /usr/share/ca-certificates/59562.pem
	I0127 11:18:43.466900    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:18:45.585331    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:18:45.586315    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:45.586348    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:18:48.114322    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:18:48.114322    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:48.115040    5908 sshutil.go:53] new ssh client: &{IP:172.29.192.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\id_rsa Username:docker}
	I0127 11:18:48.212093    5908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0127 11:18:48.220405    5908 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0127 11:18:48.262107    5908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0127 11:18:48.269487    5908 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0127 11:18:48.300011    5908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0127 11:18:48.306781    5908 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0127 11:18:48.336555    5908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0127 11:18:48.342391    5908 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0127 11:18:48.377038    5908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0127 11:18:48.382465    5908 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0127 11:18:48.412335    5908 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0127 11:18:48.419247    5908 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0127 11:18:48.440851    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 11:18:48.491100    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 11:18:48.544655    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 11:18:48.595899    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 11:18:48.639160    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0127 11:18:48.681807    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 11:18:48.736417    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 11:18:48.780956    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\ha-011400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 11:18:48.827494    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 11:18:48.870321    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem --> /usr/share/ca-certificates/5956.pem (1338 bytes)
	I0127 11:18:48.913969    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem --> /usr/share/ca-certificates/59562.pem (1708 bytes)
	I0127 11:18:48.959350    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0127 11:18:48.992255    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0127 11:18:49.023130    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0127 11:18:49.058398    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0127 11:18:49.095190    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0127 11:18:49.132504    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0127 11:18:49.166443    5908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0127 11:18:49.210486    5908 ssh_runner.go:195] Run: openssl version
	I0127 11:18:49.229491    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 11:18:49.261055    5908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:18:49.268095    5908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:18:49.278341    5908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 11:18:49.297067    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 11:18:49.325322    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5956.pem && ln -fs /usr/share/ca-certificates/5956.pem /etc/ssl/certs/5956.pem"
	I0127 11:18:49.356829    5908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5956.pem
	I0127 11:18:49.363729    5908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:52 /usr/share/ca-certificates/5956.pem
	I0127 11:18:49.375542    5908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5956.pem
	I0127 11:18:49.396012    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5956.pem /etc/ssl/certs/51391683.0"
	I0127 11:18:49.426299    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59562.pem && ln -fs /usr/share/ca-certificates/59562.pem /etc/ssl/certs/59562.pem"
	I0127 11:18:49.454038    5908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59562.pem
	I0127 11:18:49.460991    5908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:52 /usr/share/ca-certificates/59562.pem
	I0127 11:18:49.469669    5908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59562.pem
	I0127 11:18:49.489462    5908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59562.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 11:18:49.523024    5908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 11:18:49.529707    5908 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 11:18:49.530046    5908 kubeadm.go:934] updating node {m03 172.29.196.110 8443 v1.32.1 docker true true} ...
	I0127 11:18:49.530229    5908 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-011400-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.196.110
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:ha-011400 Namespace:default APIServerHAVIP:172.29.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 11:18:49.530229    5908 kube-vip.go:115] generating kube-vip config ...
	I0127 11:18:49.535614    5908 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0127 11:18:49.569689    5908 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0127 11:18:49.569794    5908 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.29.207.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.9
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0127 11:18:49.580005    5908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 11:18:49.598363    5908 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.1': No such file or directory
	
	Initiating transfer...
	I0127 11:18:49.609583    5908 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.1
	I0127 11:18:49.626913    5908 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
	I0127 11:18:49.627047    5908 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet.sha256
	I0127 11:18:49.627112    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl -> /var/lib/minikube/binaries/v1.32.1/kubectl
	I0127 11:18:49.627112    5908 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm.sha256
	I0127 11:18:49.627289    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm -> /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0127 11:18:49.639074    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:18:49.640078    5908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0127 11:18:49.640078    5908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl
	I0127 11:18:49.661571    5908 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet -> /var/lib/minikube/binaries/v1.32.1/kubelet
	I0127 11:18:49.661571    5908 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubectl': No such file or directory
	I0127 11:18:49.661571    5908 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubeadm': No such file or directory
	I0127 11:18:49.661571    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl --> /var/lib/minikube/binaries/v1.32.1/kubectl (57323672 bytes)
	I0127 11:18:49.661571    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm --> /var/lib/minikube/binaries/v1.32.1/kubeadm (70942872 bytes)
	I0127 11:18:49.673541    5908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet
	I0127 11:18:49.747787    5908 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubelet': No such file or directory
	I0127 11:18:49.748057    5908 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet --> /var/lib/minikube/binaries/v1.32.1/kubelet (77398276 bytes)
	I0127 11:18:50.939592    5908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0127 11:18:50.957571    5908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0127 11:18:50.987122    5908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 11:18:51.022957    5908 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0127 11:18:51.075819    5908 ssh_runner.go:195] Run: grep 172.29.207.254	control-plane.minikube.internal$ /etc/hosts
	I0127 11:18:51.082289    5908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.207.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 11:18:51.113530    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:18:51.321859    5908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:18:51.353879    5908 host.go:66] Checking if "ha-011400" exists ...
	I0127 11:18:51.354667    5908 start.go:317] joinCluster: &{Name:ha-011400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:ha-011400 Namespace:default APIServerHAVIP:172.29.207.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.192.249 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.195.173 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.29.196.110 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 11:18:51.355019    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0127 11:18:51.355019    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:18:53.419586    5908 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:18:53.419586    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:53.419586    5908 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:18:55.948211    5908 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:18:55.948211    5908 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:18:55.948738    5908 sshutil.go:53] new ssh client: &{IP:172.29.192.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\id_rsa Username:docker}
	I0127 11:18:56.182381    5908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.8272084s)
	I0127 11:18:56.182444    5908 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.29.196.110 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 11:18:56.182517    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m39fjp.qb6jxdygv1llgskr --discovery-token-ca-cert-hash sha256:7b53ba02e26824ededdf08178373023b65bda8005ddad46edfe91cb6a3cb8d3f --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-011400-m03 --control-plane --apiserver-advertise-address=172.29.196.110 --apiserver-bind-port=8443"
	I0127 11:19:37.306899    5908 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token m39fjp.qb6jxdygv1llgskr --discovery-token-ca-cert-hash sha256:7b53ba02e26824ededdf08178373023b65bda8005ddad46edfe91cb6a3cb8d3f --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-011400-m03 --control-plane --apiserver-advertise-address=172.29.196.110 --apiserver-bind-port=8443": (41.1239539s)
	I0127 11:19:37.306899    5908 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0127 11:19:38.028485    5908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-011400-m03 minikube.k8s.io/updated_at=2025_01_27T11_19_38_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=ha-011400 minikube.k8s.io/primary=false
	I0127 11:19:38.228063    5908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-011400-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0127 11:19:38.382897    5908 start.go:319] duration metric: took 47.0277407s to joinCluster
	I0127 11:19:38.383568    5908 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.29.196.110 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 11:19:38.384618    5908 config.go:182] Loaded profile config "ha-011400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 11:19:38.386477    5908 out.go:177] * Verifying Kubernetes components...
	I0127 11:19:38.400547    5908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 11:19:38.762570    5908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 11:19:38.800463    5908 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 11:19:38.800948    5908 kapi.go:59] client config for ha-011400: &rest.Config{Host:"https://172.29.207.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-011400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\ha-011400\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x301e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0127 11:19:38.800948    5908 kubeadm.go:483] Overriding stale ClientConfig host https://172.29.207.254:8443 with https://172.29.192.249:8443
	I0127 11:19:38.803364    5908 node_ready.go:35] waiting up to 6m0s for node "ha-011400-m03" to be "Ready" ...
	I0127 11:19:38.803547    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:38.803547    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:38.803598    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:38.803598    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:38.819512    5908 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0127 11:19:39.303981    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:39.303981    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:39.303981    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:39.303981    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:39.309661    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:39.803966    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:39.803966    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:39.803966    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:39.803966    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:39.810405    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:19:40.304063    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:40.304063    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:40.304063    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:40.304063    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:40.309590    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:40.804461    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:40.804461    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:40.804461    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:40.804461    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:40.808913    5908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 11:19:40.809555    5908 node_ready.go:53] node "ha-011400-m03" has status "Ready":"False"
	I0127 11:19:41.303596    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:41.303596    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:41.303596    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:41.303596    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:41.315883    5908 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0127 11:19:41.803764    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:41.803764    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:41.803764    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:41.803764    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:41.810008    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:19:42.303429    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:42.303429    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:42.303429    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:42.303429    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:42.312327    5908 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0127 11:19:42.805064    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:42.805064    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:42.805064    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:42.805064    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:42.810084    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:42.810487    5908 node_ready.go:53] node "ha-011400-m03" has status "Ready":"False"
	I0127 11:19:43.303849    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:43.303849    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:43.303849    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:43.303849    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:43.315192    5908 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0127 11:19:43.804272    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:43.804272    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:43.804272    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:43.804272    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:43.812730    5908 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0127 11:19:44.304498    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:44.304498    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:44.304498    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:44.304498    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:44.310686    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:19:44.803787    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:44.803787    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:44.803787    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:44.803787    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:44.809772    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:45.305399    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:45.305458    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:45.305458    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:45.305458    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:45.374520    5908 round_trippers.go:574] Response Status: 200 OK in 69 milliseconds
	I0127 11:19:45.376533    5908 node_ready.go:53] node "ha-011400-m03" has status "Ready":"False"
	I0127 11:19:45.803682    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:45.804168    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:45.804168    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:45.804168    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:45.812576    5908 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0127 11:19:46.305337    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:46.305337    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:46.305337    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:46.305337    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:46.310664    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:46.803622    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:46.803622    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:46.803622    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:46.803622    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:46.808629    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:47.305048    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:47.305048    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:47.305048    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:47.305048    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:47.310706    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:47.803545    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:47.803545    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:47.803545    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:47.803545    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:47.809891    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:19:47.814521    5908 node_ready.go:53] node "ha-011400-m03" has status "Ready":"False"
	I0127 11:19:48.303693    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:48.303693    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:48.303693    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:48.303693    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:48.309415    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:48.803953    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:48.803953    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:48.803953    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:48.803953    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:48.809672    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:49.304690    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:49.304690    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:49.304690    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:49.304690    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:49.312229    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:19:49.804432    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:49.804432    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:49.804432    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:49.804432    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:49.821104    5908 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0127 11:19:49.823150    5908 node_ready.go:53] node "ha-011400-m03" has status "Ready":"False"
	I0127 11:19:50.303569    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:50.303569    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:50.303569    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:50.303569    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:50.309945    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:19:50.804512    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:50.804860    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:50.804860    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:50.804860    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:50.810278    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:51.304460    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:51.304460    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:51.304460    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:51.304460    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:51.312163    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:19:51.804468    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:51.804468    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:51.804468    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:51.804468    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:51.810630    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:19:52.303605    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:52.303605    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:52.303605    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:52.303605    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:52.312356    5908 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0127 11:19:52.313217    5908 node_ready.go:53] node "ha-011400-m03" has status "Ready":"False"
	I0127 11:19:52.804596    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:52.804596    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:52.804596    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:52.804596    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:52.828103    5908 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0127 11:19:53.304153    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:53.304153    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:53.304153    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:53.304153    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:53.309179    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:19:53.804526    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:53.804526    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:53.804526    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:53.804526    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:53.810810    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:19:54.305035    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:54.305035    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:54.305035    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:54.305035    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:54.312428    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:19:54.805332    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:54.805409    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:54.805409    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:54.805409    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:54.811249    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:54.811567    5908 node_ready.go:53] node "ha-011400-m03" has status "Ready":"False"
	I0127 11:19:55.304919    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:55.304999    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:55.304999    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:55.304999    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:55.310514    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:19:55.804009    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:55.804009    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:55.804009    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:55.804009    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:55.812796    5908 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0127 11:19:56.304046    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:56.304046    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:56.304046    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:56.304046    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:56.309360    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:56.804247    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:56.804247    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:56.804247    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:56.804247    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:56.809025    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:19:57.304547    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:57.304547    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:57.304547    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:57.304547    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:57.311593    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:19:57.312280    5908 node_ready.go:53] node "ha-011400-m03" has status "Ready":"False"
	I0127 11:19:57.804770    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:57.804770    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:57.804770    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:57.804770    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:57.809962    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:58.304459    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:58.305046    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:58.305046    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:58.305046    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:58.309238    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:19:58.804626    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:58.804626    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:58.804626    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:58.804626    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:58.809810    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:19:59.303778    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:59.304299    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:59.304299    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:59.304299    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:59.314794    5908 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0127 11:19:59.315531    5908 node_ready.go:53] node "ha-011400-m03" has status "Ready":"False"
	I0127 11:19:59.804536    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:19:59.804610    5908 round_trippers.go:469] Request Headers:
	I0127 11:19:59.804610    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:19:59.804610    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:19:59.810319    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:20:00.305856    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:20:00.305941    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:00.305941    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:00.305941    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:00.313341    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:20:00.804074    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:20:00.804074    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:00.804074    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:00.804074    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:00.809241    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:20:01.304092    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:20:01.304525    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:01.304525    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:01.304525    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:01.313882    5908 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0127 11:20:01.314650    5908 node_ready.go:49] node "ha-011400-m03" has status "Ready":"True"
	I0127 11:20:01.314682    5908 node_ready.go:38] duration metric: took 22.5110841s for node "ha-011400-m03" to be "Ready" ...
	I0127 11:20:01.314682    5908 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:20:01.314838    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods
	I0127 11:20:01.314867    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:01.314867    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:01.314867    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:01.328003    5908 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0127 11:20:01.342489    5908 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-228t7" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:01.342489    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-228t7
	I0127 11:20:01.342489    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:01.342489    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:01.342489    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:01.351583    5908 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0127 11:20:01.353402    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:20:01.353402    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:01.353402    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:01.353402    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:01.357687    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:20:01.358436    5908 pod_ready.go:93] pod "coredns-668d6bf9bc-228t7" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:01.358436    5908 pod_ready.go:82] duration metric: took 15.9462ms for pod "coredns-668d6bf9bc-228t7" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:01.358482    5908 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-8b9xh" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:01.358559    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-8b9xh
	I0127 11:20:01.358599    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:01.358656    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:01.358656    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:01.361821    5908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 11:20:01.362809    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:20:01.363525    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:01.363525    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:01.363525    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:01.367103    5908 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 11:20:01.368528    5908 pod_ready.go:93] pod "coredns-668d6bf9bc-8b9xh" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:01.368557    5908 pod_ready.go:82] duration metric: took 10.075ms for pod "coredns-668d6bf9bc-8b9xh" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:01.368557    5908 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:01.368702    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-011400
	I0127 11:20:01.368729    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:01.368729    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:01.368729    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:01.375491    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:20:01.376325    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:20:01.376398    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:01.376398    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:01.376398    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:01.387958    5908 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0127 11:20:01.388814    5908 pod_ready.go:93] pod "etcd-ha-011400" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:01.388866    5908 pod_ready.go:82] duration metric: took 20.2607ms for pod "etcd-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:01.388936    5908 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:01.389070    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-011400-m02
	I0127 11:20:01.389131    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:01.389131    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:01.389131    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:01.403080    5908 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0127 11:20:01.403611    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:20:01.403611    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:01.403611    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:01.403611    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:01.410333    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:20:01.410887    5908 pod_ready.go:93] pod "etcd-ha-011400-m02" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:01.410887    5908 pod_ready.go:82] duration metric: took 21.9508ms for pod "etcd-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:01.410887    5908 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-011400-m03" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:01.504559    5908 request.go:632] Waited for 93.6717ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-011400-m03
	I0127 11:20:01.504559    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/etcd-ha-011400-m03
	I0127 11:20:01.504559    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:01.504559    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:01.504559    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:01.511245    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:20:01.704573    5908 request.go:632] Waited for 192.5179ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:20:01.704573    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:20:01.704573    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:01.704573    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:01.704573    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:01.709835    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:20:01.710388    5908 pod_ready.go:93] pod "etcd-ha-011400-m03" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:01.710388    5908 pod_ready.go:82] duration metric: took 299.4987ms for pod "etcd-ha-011400-m03" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:01.710619    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:01.904530    5908 request.go:632] Waited for 193.8001ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-011400
	I0127 11:20:01.904530    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-011400
	I0127 11:20:01.904530    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:01.904530    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:01.904530    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:01.912352    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:20:02.105191    5908 request.go:632] Waited for 191.2307ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:20:02.105191    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:20:02.105191    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:02.105191    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:02.105191    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:02.110692    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:20:02.112177    5908 pod_ready.go:93] pod "kube-apiserver-ha-011400" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:02.112329    5908 pod_ready.go:82] duration metric: took 401.7066ms for pod "kube-apiserver-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:02.112329    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:02.304481    5908 request.go:632] Waited for 192.0174ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-011400-m02
	I0127 11:20:02.304481    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-011400-m02
	I0127 11:20:02.304481    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:02.304481    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:02.304481    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:02.314091    5908 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0127 11:20:02.504998    5908 request.go:632] Waited for 189.9145ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:20:02.505487    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:20:02.505591    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:02.505591    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:02.505591    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:02.511151    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:20:02.512156    5908 pod_ready.go:93] pod "kube-apiserver-ha-011400-m02" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:02.512259    5908 pod_ready.go:82] duration metric: took 399.9251ms for pod "kube-apiserver-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:02.512259    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-011400-m03" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:02.704273    5908 request.go:632] Waited for 192.0123ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-011400-m03
	I0127 11:20:02.704273    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-011400-m03
	I0127 11:20:02.704273    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:02.704273    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:02.704273    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:02.709726    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:20:02.904922    5908 request.go:632] Waited for 193.6838ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:20:02.904922    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:20:02.905288    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:02.905288    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:02.905288    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:02.910395    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:20:02.911006    5908 pod_ready.go:93] pod "kube-apiserver-ha-011400-m03" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:02.911118    5908 pod_ready.go:82] duration metric: took 398.8054ms for pod "kube-apiserver-ha-011400-m03" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:02.911118    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:03.104160    5908 request.go:632] Waited for 192.9377ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-011400
	I0127 11:20:03.104160    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-011400
	I0127 11:20:03.104537    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:03.104537    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:03.104537    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:03.110469    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:20:03.304170    5908 request.go:632] Waited for 192.4881ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:20:03.304170    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:20:03.304170    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:03.304170    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:03.304170    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:03.310524    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:20:03.311315    5908 pod_ready.go:93] pod "kube-controller-manager-ha-011400" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:03.311315    5908 pod_ready.go:82] duration metric: took 400.1922ms for pod "kube-controller-manager-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:03.311315    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:03.504341    5908 request.go:632] Waited for 192.8403ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-011400-m02
	I0127 11:20:03.504924    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-011400-m02
	I0127 11:20:03.504959    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:03.504959    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:03.505002    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:03.509914    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:20:03.704519    5908 request.go:632] Waited for 193.6267ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:20:03.704849    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:20:03.704849    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:03.704849    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:03.704849    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:03.711623    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:20:03.712641    5908 pod_ready.go:93] pod "kube-controller-manager-ha-011400-m02" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:03.712641    5908 pod_ready.go:82] duration metric: took 401.3223ms for pod "kube-controller-manager-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:03.712811    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-011400-m03" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:03.904799    5908 request.go:632] Waited for 191.867ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-011400-m03
	I0127 11:20:03.904799    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-011400-m03
	I0127 11:20:03.904799    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:03.904799    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:03.904799    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:03.914159    5908 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0127 11:20:04.104369    5908 request.go:632] Waited for 188.6075ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:20:04.104760    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:20:04.104760    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:04.104760    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:04.104760    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:04.109110    5908 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 11:20:04.109662    5908 pod_ready.go:93] pod "kube-controller-manager-ha-011400-m03" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:04.109662    5908 pod_ready.go:82] duration metric: took 396.8471ms for pod "kube-controller-manager-ha-011400-m03" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:04.109662    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4pjv8" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:04.304912    5908 request.go:632] Waited for 195.2482ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4pjv8
	I0127 11:20:04.305329    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4pjv8
	I0127 11:20:04.305329    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:04.305329    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:04.305329    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:04.310771    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:20:04.504737    5908 request.go:632] Waited for 193.0176ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:20:04.504737    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:20:04.504737    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:04.504737    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:04.504737    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:04.519628    5908 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0127 11:20:04.520605    5908 pod_ready.go:93] pod "kube-proxy-4pjv8" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:04.520605    5908 pod_ready.go:82] duration metric: took 410.939ms for pod "kube-proxy-4pjv8" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:04.520605    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hg72m" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:04.705917    5908 request.go:632] Waited for 185.3096ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg72m
	I0127 11:20:04.706419    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hg72m
	I0127 11:20:04.706466    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:04.706466    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:04.706466    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:04.712526    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:20:04.904658    5908 request.go:632] Waited for 191.1289ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:20:04.904658    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:20:04.904658    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:04.904658    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:04.904658    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:04.910591    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:20:04.911496    5908 pod_ready.go:93] pod "kube-proxy-hg72m" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:04.911603    5908 pod_ready.go:82] duration metric: took 390.9943ms for pod "kube-proxy-hg72m" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:04.911603    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x52km" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:05.104818    5908 request.go:632] Waited for 193.2123ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x52km
	I0127 11:20:05.104818    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x52km
	I0127 11:20:05.104818    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:05.104818    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:05.104818    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:05.110849    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:20:05.305207    5908 request.go:632] Waited for 193.0108ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:20:05.305207    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:20:05.305672    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:05.305672    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:05.305672    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:05.310840    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:20:05.311541    5908 pod_ready.go:93] pod "kube-proxy-x52km" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:05.311541    5908 pod_ready.go:82] duration metric: took 399.9337ms for pod "kube-proxy-x52km" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:05.311541    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:05.505053    5908 request.go:632] Waited for 193.5096ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-011400
	I0127 11:20:05.505053    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-011400
	I0127 11:20:05.505053    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:05.505053    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:05.505053    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:05.510263    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:20:05.704757    5908 request.go:632] Waited for 192.8756ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:20:05.704757    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400
	I0127 11:20:05.704757    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:05.705265    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:05.705265    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:05.710676    5908 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 11:20:05.711720    5908 pod_ready.go:93] pod "kube-scheduler-ha-011400" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:05.711787    5908 pod_ready.go:82] duration metric: took 400.2416ms for pod "kube-scheduler-ha-011400" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:05.711787    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:05.904640    5908 request.go:632] Waited for 192.7565ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-011400-m02
	I0127 11:20:05.904640    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-011400-m02
	I0127 11:20:05.904640    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:05.904640    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:05.904640    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:05.911198    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:20:06.104479    5908 request.go:632] Waited for 192.5249ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:20:06.104479    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m02
	I0127 11:20:06.104479    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:06.104479    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:06.104479    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:06.110487    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:20:06.111180    5908 pod_ready.go:93] pod "kube-scheduler-ha-011400-m02" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:06.111180    5908 pod_ready.go:82] duration metric: took 399.3887ms for pod "kube-scheduler-ha-011400-m02" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:06.111180    5908 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-011400-m03" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:06.305002    5908 request.go:632] Waited for 193.8203ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-011400-m03
	I0127 11:20:06.305002    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-011400-m03
	I0127 11:20:06.305002    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:06.305002    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:06.305002    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:06.312376    5908 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 11:20:06.505014    5908 request.go:632] Waited for 191.8398ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:20:06.505014    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes/ha-011400-m03
	I0127 11:20:06.505014    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:06.505014    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:06.505014    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:06.511220    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:20:06.511946    5908 pod_ready.go:93] pod "kube-scheduler-ha-011400-m03" in "kube-system" namespace has status "Ready":"True"
	I0127 11:20:06.512042    5908 pod_ready.go:82] duration metric: took 400.8579ms for pod "kube-scheduler-ha-011400-m03" in "kube-system" namespace to be "Ready" ...
	I0127 11:20:06.512042    5908 pod_ready.go:39] duration metric: took 5.1972597s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 11:20:06.512042    5908 api_server.go:52] waiting for apiserver process to appear ...
	I0127 11:20:06.522412    5908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:20:06.549068    5908 api_server.go:72] duration metric: took 28.1651244s to wait for apiserver process to appear ...
	I0127 11:20:06.549140    5908 api_server.go:88] waiting for apiserver healthz status ...
	I0127 11:20:06.549140    5908 api_server.go:253] Checking apiserver healthz at https://172.29.192.249:8443/healthz ...
	I0127 11:20:06.562145    5908 api_server.go:279] https://172.29.192.249:8443/healthz returned 200:
	ok
	I0127 11:20:06.562306    5908 round_trippers.go:463] GET https://172.29.192.249:8443/version
	I0127 11:20:06.562379    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:06.562406    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:06.562418    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:06.564037    5908 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0127 11:20:06.564167    5908 api_server.go:141] control plane version: v1.32.1
	I0127 11:20:06.564167    5908 api_server.go:131] duration metric: took 15.0274ms to wait for apiserver health ...
	I0127 11:20:06.564167    5908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 11:20:06.704449    5908 request.go:632] Waited for 140.2807ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods
	I0127 11:20:06.704917    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods
	I0127 11:20:06.704917    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:06.704917    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:06.704917    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:06.714722    5908 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0127 11:20:06.724819    5908 system_pods.go:59] 24 kube-system pods found
	I0127 11:20:06.724819    5908 system_pods.go:61] "coredns-668d6bf9bc-228t7" [ac40dfec-9e9f-4414-9259-a7dadfb2c93d] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "coredns-668d6bf9bc-8b9xh" [647a1e55-d5ce-4f2b-933f-8caf13d7463b] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "etcd-ha-011400" [90238c1c-70b2-47e8-9bab-49f2334ca4b3] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "etcd-ha-011400-m02" [fcda2776-bc47-47af-948a-94e549a41fec] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "etcd-ha-011400-m03" [2e852046-3be3-4615-a27f-0ec1a5673416] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kindnet-fs97j" [d480fa1c-808e-4c5d-818e-26281dca23d4] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kindnet-ll5br" [6a2a0fea-258a-4593-8445-398f37e379e4] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kindnet-mg445" [37787d9b-44c4-4e83-8d2c-e67333301fd1] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-apiserver-ha-011400" [7bda282c-7cb1-46f1-9bb8-366bc992aaed] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-apiserver-ha-011400-m02" [8e5dcd2c-fbca-473d-8aa2-70e7fb8866c7] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-apiserver-ha-011400-m03" [80fe2bca-85bb-4211-8792-5d59b5dab513] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-controller-manager-ha-011400" [1b8e425d-03da-4d95-86e5-e1e6f15b64bd] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-controller-manager-ha-011400-m02" [c20cfbe1-337f-462f-968f-c19741634ac4] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-controller-manager-ha-011400-m03" [ee7a8965-3fd5-41ee-980e-896aa7293038] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-proxy-4pjv8" [c0b28c82-50ac-4021-949d-75883580a018] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-proxy-hg72m" [dc860339-d169-452b-9621-170ae73c7a5e] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-proxy-x52km" [0a6cc7f2-2b15-4db1-b5fb-d6448d4bd295] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-scheduler-ha-011400" [35220ede-c59f-4d24-88c5-728088af2abf] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-scheduler-ha-011400-m02" [250614b6-0c08-4e8a-a080-58253b81d4f7] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-scheduler-ha-011400-m03" [ef2c825c-f959-4df5-afa0-f8e34a48aadf] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-vip-ha-011400" [31c47527-c1fe-4064-bcb4-faffcedab1f4] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-vip-ha-011400-m02" [1e3bf93b-caab-4f37-a8bd-36f0ad76eb4c] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "kube-vip-ha-011400-m03" [64122fe5-f88f-430b-8e9b-e06e18929823] Running
	I0127 11:20:06.724819    5908 system_pods.go:61] "storage-provisioner" [2755d063-0183-41c1-9fe8-e533017aef39] Running
	I0127 11:20:06.724819    5908 system_pods.go:74] duration metric: took 160.6507ms to wait for pod list to return data ...
	I0127 11:20:06.724819    5908 default_sa.go:34] waiting for default service account to be created ...
	I0127 11:20:06.904948    5908 request.go:632] Waited for 179.1715ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/default/serviceaccounts
	I0127 11:20:06.905447    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/default/serviceaccounts
	I0127 11:20:06.905447    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:06.905447    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:06.905447    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:06.911450    5908 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 11:20:06.911450    5908 default_sa.go:45] found service account: "default"
	I0127 11:20:06.911450    5908 default_sa.go:55] duration metric: took 186.6289ms for default service account to be created ...
	I0127 11:20:06.911450    5908 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 11:20:07.104790    5908 request.go:632] Waited for 193.3378ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods
	I0127 11:20:07.104790    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/namespaces/kube-system/pods
	I0127 11:20:07.104790    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:07.104790    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:07.105223    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:07.114365    5908 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0127 11:20:07.128330    5908 system_pods.go:87] 24 kube-system pods found
	I0127 11:20:07.128330    5908 system_pods.go:105] "coredns-668d6bf9bc-228t7" [ac40dfec-9e9f-4414-9259-a7dadfb2c93d] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "coredns-668d6bf9bc-8b9xh" [647a1e55-d5ce-4f2b-933f-8caf13d7463b] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "etcd-ha-011400" [90238c1c-70b2-47e8-9bab-49f2334ca4b3] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "etcd-ha-011400-m02" [fcda2776-bc47-47af-948a-94e549a41fec] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "etcd-ha-011400-m03" [2e852046-3be3-4615-a27f-0ec1a5673416] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kindnet-fs97j" [d480fa1c-808e-4c5d-818e-26281dca23d4] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kindnet-ll5br" [6a2a0fea-258a-4593-8445-398f37e379e4] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kindnet-mg445" [37787d9b-44c4-4e83-8d2c-e67333301fd1] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-apiserver-ha-011400" [7bda282c-7cb1-46f1-9bb8-366bc992aaed] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-apiserver-ha-011400-m02" [8e5dcd2c-fbca-473d-8aa2-70e7fb8866c7] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-apiserver-ha-011400-m03" [80fe2bca-85bb-4211-8792-5d59b5dab513] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-controller-manager-ha-011400" [1b8e425d-03da-4d95-86e5-e1e6f15b64bd] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-controller-manager-ha-011400-m02" [c20cfbe1-337f-462f-968f-c19741634ac4] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-controller-manager-ha-011400-m03" [ee7a8965-3fd5-41ee-980e-896aa7293038] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-proxy-4pjv8" [c0b28c82-50ac-4021-949d-75883580a018] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-proxy-hg72m" [dc860339-d169-452b-9621-170ae73c7a5e] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-proxy-x52km" [0a6cc7f2-2b15-4db1-b5fb-d6448d4bd295] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-scheduler-ha-011400" [35220ede-c59f-4d24-88c5-728088af2abf] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-scheduler-ha-011400-m02" [250614b6-0c08-4e8a-a080-58253b81d4f7] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-scheduler-ha-011400-m03" [ef2c825c-f959-4df5-afa0-f8e34a48aadf] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-vip-ha-011400" [31c47527-c1fe-4064-bcb4-faffcedab1f4] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-vip-ha-011400-m02" [1e3bf93b-caab-4f37-a8bd-36f0ad76eb4c] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "kube-vip-ha-011400-m03" [64122fe5-f88f-430b-8e9b-e06e18929823] Running
	I0127 11:20:07.128330    5908 system_pods.go:105] "storage-provisioner" [2755d063-0183-41c1-9fe8-e533017aef39] Running
	I0127 11:20:07.128330    5908 system_pods.go:147] duration metric: took 216.877ms to wait for k8s-apps to be running ...
	I0127 11:20:07.128330    5908 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 11:20:07.142916    5908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:20:07.170893    5908 system_svc.go:56] duration metric: took 42.5634ms WaitForService to wait for kubelet
	I0127 11:20:07.171013    5908 kubeadm.go:582] duration metric: took 28.7869977s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 11:20:07.171013    5908 node_conditions.go:102] verifying NodePressure condition ...
	I0127 11:20:07.304264    5908 request.go:632] Waited for 133.1263ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.192.249:8443/api/v1/nodes
	I0127 11:20:07.304264    5908 round_trippers.go:463] GET https://172.29.192.249:8443/api/v1/nodes
	I0127 11:20:07.304264    5908 round_trippers.go:469] Request Headers:
	I0127 11:20:07.304264    5908 round_trippers.go:473]     Accept: application/json, */*
	I0127 11:20:07.304264    5908 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 11:20:07.313973    5908 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0127 11:20:07.315325    5908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 11:20:07.315325    5908 node_conditions.go:123] node cpu capacity is 2
	I0127 11:20:07.315325    5908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 11:20:07.315325    5908 node_conditions.go:123] node cpu capacity is 2
	I0127 11:20:07.315325    5908 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 11:20:07.315325    5908 node_conditions.go:123] node cpu capacity is 2
	I0127 11:20:07.315325    5908 node_conditions.go:105] duration metric: took 144.3105ms to run NodePressure ...
	I0127 11:20:07.315325    5908 start.go:241] waiting for startup goroutines ...
	I0127 11:20:07.315325    5908 start.go:255] writing updated cluster config ...
	I0127 11:20:07.327405    5908 ssh_runner.go:195] Run: rm -f paused
	I0127 11:20:07.468457    5908 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 11:20:07.473338    5908 out.go:177] * Done! kubectl is now configured to use "ha-011400" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jan 27 11:12:32 ha-011400 dockerd[1448]: time="2025-01-27T11:12:32.791208816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 11:12:32 ha-011400 dockerd[1448]: time="2025-01-27T11:12:32.843556208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 27 11:12:32 ha-011400 dockerd[1448]: time="2025-01-27T11:12:32.843650809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 27 11:12:32 ha-011400 dockerd[1448]: time="2025-01-27T11:12:32.843671409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 11:12:32 ha-011400 dockerd[1448]: time="2025-01-27T11:12:32.843785910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 11:12:33 ha-011400 cri-dockerd[1342]: time="2025-01-27T11:12:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1935c738005cb77f13f750e1b189a2c871075e21c55639538224577889f20a82/resolv.conf as [nameserver 172.29.192.1]"
	Jan 27 11:12:33 ha-011400 cri-dockerd[1342]: time="2025-01-27T11:12:33Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3a60197906090b50c5485229f65e2090b0aa01f0f43bf2dd514c730b4ce5896f/resolv.conf as [nameserver 172.29.192.1]"
	Jan 27 11:12:33 ha-011400 dockerd[1448]: time="2025-01-27T11:12:33.453621263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 27 11:12:33 ha-011400 dockerd[1448]: time="2025-01-27T11:12:33.453836365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 27 11:12:33 ha-011400 dockerd[1448]: time="2025-01-27T11:12:33.454538173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 11:12:33 ha-011400 dockerd[1448]: time="2025-01-27T11:12:33.454833276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 11:12:33 ha-011400 dockerd[1448]: time="2025-01-27T11:12:33.500849580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 27 11:12:33 ha-011400 dockerd[1448]: time="2025-01-27T11:12:33.501209184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 27 11:12:33 ha-011400 dockerd[1448]: time="2025-01-27T11:12:33.501337286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 11:12:33 ha-011400 dockerd[1448]: time="2025-01-27T11:12:33.502542999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 11:20:45 ha-011400 dockerd[1448]: time="2025-01-27T11:20:45.727347385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 27 11:20:45 ha-011400 dockerd[1448]: time="2025-01-27T11:20:45.727449685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 27 11:20:45 ha-011400 dockerd[1448]: time="2025-01-27T11:20:45.727571686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 11:20:45 ha-011400 dockerd[1448]: time="2025-01-27T11:20:45.727782687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 11:20:45 ha-011400 cri-dockerd[1342]: time="2025-01-27T11:20:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0051fd728cd4db3fa1d459f6a64f0cf7abc9f0dbeaaee17684f20afab815f6ec/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jan 27 11:20:47 ha-011400 cri-dockerd[1342]: time="2025-01-27T11:20:47Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jan 27 11:20:47 ha-011400 dockerd[1448]: time="2025-01-27T11:20:47.743359503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 27 11:20:47 ha-011400 dockerd[1448]: time="2025-01-27T11:20:47.743524005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 27 11:20:47 ha-011400 dockerd[1448]: time="2025-01-27T11:20:47.743602906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 11:20:47 ha-011400 dockerd[1448]: time="2025-01-27T11:20:47.744276413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e9983636c7dcf       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   0051fd728cd4d       busybox-58667487b6-68jl6
	bcad71a4f97a9       c69fa2e9cbf5f                                                                                         26 minutes ago      Running             coredns                   0                   3a60197906090       coredns-668d6bf9bc-8b9xh
	f0e3ddbafad83       c69fa2e9cbf5f                                                                                         26 minutes ago      Running             coredns                   0                   1935c738005cb       coredns-668d6bf9bc-228t7
	4b61052edeb8d       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   eaa6ebb740ba0       storage-provisioner
	2069e52c51e41       kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108              27 minutes ago      Running             kindnet-cni               0                   e24b0a5e38273       kindnet-ll5br
	b57131c4a903e       e29f9c7391fd9                                                                                         27 minutes ago      Running             kube-proxy                0                   0aab982097c2b       kube-proxy-hg72m
	69457ef5aaab5       ghcr.io/kube-vip/kube-vip@sha256:717b8bef2758c10042d64ae7949201ef7f243d928fce265b04e488e844bf9528     27 minutes ago      Running             kube-vip                  0                   9fa384b6dac7d       kube-vip-ha-011400
	dcdc672289089       2b0d6572d062c                                                                                         27 minutes ago      Running             kube-scheduler            0                   2706c9625e77c       kube-scheduler-ha-011400
	198b69006a51b       019ee182b58e2                                                                                         27 minutes ago      Running             kube-controller-manager   0                   cd67bd2b10fab       kube-controller-manager-ha-011400
	3ad7004cc4fef       a9e7e6b294baf                                                                                         27 minutes ago      Running             etcd                      0                   23128693ce80a       etcd-ha-011400
	9bbef2b1e01c4       95c0bda56fc4d                                                                                         27 minutes ago      Running             kube-apiserver            0                   086cfc8d226c5       kube-apiserver-ha-011400
	
	
	==> coredns [bcad71a4f97a] <==
	[INFO] 10.244.0.4:33596 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000255003s
	[INFO] 10.244.0.4:60693 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000289503s
	[INFO] 10.244.2.2:55507 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000394005s
	[INFO] 10.244.2.2:50550 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000063701s
	[INFO] 10.244.2.2:44843 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106402s
	[INFO] 10.244.2.2:51347 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000236902s
	[INFO] 10.244.1.2:35484 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000177102s
	[INFO] 10.244.1.2:51286 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000201402s
	[INFO] 10.244.1.2:49276 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000303403s
	[INFO] 10.244.0.4:32804 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000228302s
	[INFO] 10.244.0.4:45584 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000223003s
	[INFO] 10.244.2.2:47725 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159501s
	[INFO] 10.244.2.2:43405 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000221303s
	[INFO] 10.244.2.2:55745 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000175002s
	[INFO] 10.244.1.2:52662 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000352304s
	[INFO] 10.244.1.2:36689 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000202002s
	[INFO] 10.244.1.2:54398 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000152602s
	[INFO] 10.244.0.4:50709 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000251503s
	[INFO] 10.244.0.4:47310 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000169302s
	[INFO] 10.244.0.4:57342 - 5 "PTR IN 1.192.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000353704s
	[INFO] 10.244.2.2:39323 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163502s
	[INFO] 10.244.2.2:54278 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000253202s
	[INFO] 10.244.2.2:47951 - 5 "PTR IN 1.192.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000176802s
	[INFO] 10.244.1.2:49224 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000312603s
	[INFO] 10.244.1.2:57693 - 5 "PTR IN 1.192.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000393904s
	
	
	==> coredns [f0e3ddbafad8] <==
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:40089 - 18739 "HINFO IN 6468488692358095045.1233646566971498252. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.092172501s
	[INFO] 10.244.0.4:53802 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000272203s
	[INFO] 10.244.2.2:45834 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.002445225s
	[INFO] 10.244.1.2:55758 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.001212313s
	[INFO] 10.244.1.2:49496 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.020711714s
	[INFO] 10.244.1.2:59784 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000179402s
	[INFO] 10.244.2.2:54197 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000307603s
	[INFO] 10.244.2.2:47409 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.109862534s
	[INFO] 10.244.2.2:37295 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000193702s
	[INFO] 10.244.2.2:47677 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000246203s
	[INFO] 10.244.1.2:55326 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184102s
	[INFO] 10.244.1.2:39052 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000132401s
	[INFO] 10.244.1.2:36947 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000229203s
	[INFO] 10.244.1.2:46183 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000222102s
	[INFO] 10.244.1.2:34533 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000178002s
	[INFO] 10.244.0.4:42338 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000207002s
	[INFO] 10.244.0.4:51375 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184802s
	[INFO] 10.244.2.2:45212 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092601s
	[INFO] 10.244.1.2:42310 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000222702s
	[INFO] 10.244.0.4:33035 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000632306s
	[INFO] 10.244.2.2:37533 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000110201s
	[INFO] 10.244.1.2:37391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144001s
	[INFO] 10.244.1.2:36153 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000088501s
	
	
	==> describe nodes <==
	Name:               ha-011400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-011400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	                    minikube.k8s.io/name=ha-011400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T11_12_09_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 11:12:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-011400
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 11:39:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 11:35:46 +0000   Mon, 27 Jan 2025 11:12:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 11:35:46 +0000   Mon, 27 Jan 2025 11:12:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 11:35:46 +0000   Mon, 27 Jan 2025 11:12:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 11:35:46 +0000   Mon, 27 Jan 2025 11:12:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.192.249
	  Hostname:    ha-011400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 10a11ade80354ac0997dbbc175cad0bf
	  System UUID:                d8404609-e752-314c-b066-45b46de87e79
	  Boot ID:                    4eb876da-53a3-40a3-9774-960843ee30d1
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-68jl6             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-668d6bf9bc-228t7             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 coredns-668d6bf9bc-8b9xh             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-ha-011400                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-ll5br                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-011400             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-011400    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-hg72m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-011400             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-011400                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27m                kube-proxy       
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)  kubelet          Node ha-011400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)  kubelet          Node ha-011400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)  kubelet          Node ha-011400 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m                kubelet          Node ha-011400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                kubelet          Node ha-011400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                kubelet          Node ha-011400 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27m                node-controller  Node ha-011400 event: Registered Node ha-011400 in Controller
	  Normal  NodeReady                26m                kubelet          Node ha-011400 status is now: NodeReady
	  Normal  RegisteredNode           23m                node-controller  Node ha-011400 event: Registered Node ha-011400 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-011400 event: Registered Node ha-011400 in Controller
	
	
	Name:               ha-011400-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-011400-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	                    minikube.k8s.io/name=ha-011400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_01_27T11_15_46_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 11:15:41 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-011400-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 11:37:49 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 27 Jan 2025 11:33:00 +0000   Mon, 27 Jan 2025 11:38:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 27 Jan 2025 11:33:00 +0000   Mon, 27 Jan 2025 11:38:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 27 Jan 2025 11:33:00 +0000   Mon, 27 Jan 2025 11:38:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 27 Jan 2025 11:33:00 +0000   Mon, 27 Jan 2025 11:38:41 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.29.195.173
	  Hostname:    ha-011400-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 a3b338ed17924366a5216d6a6ca57440
	  System UUID:                60f4b9d5-23b7-b341-9c42-534bfb963bdf
	  Boot ID:                    abec3bf6-3ccd-47b8-9b7b-3086f6341ae4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-qwccg                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-011400-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-fs97j                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-ha-011400-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-011400-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-x52km                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-011400-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-011400-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node ha-011400-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node ha-011400-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node ha-011400-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           23m                node-controller  Node ha-011400-m02 event: Registered Node ha-011400-m02 in Controller
	  Normal  RegisteredNode           23m                node-controller  Node ha-011400-m02 event: Registered Node ha-011400-m02 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-011400-m02 event: Registered Node ha-011400-m02 in Controller
	  Normal  NodeNotReady             46s                node-controller  Node ha-011400-m02 status is now: NodeNotReady
	
	
	Name:               ha-011400-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-011400-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	                    minikube.k8s.io/name=ha-011400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_01_27T11_19_38_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 11:19:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-011400-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 11:39:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 11:37:14 +0000   Mon, 27 Jan 2025 11:19:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 11:37:14 +0000   Mon, 27 Jan 2025 11:19:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 11:37:14 +0000   Mon, 27 Jan 2025 11:19:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 11:37:14 +0000   Mon, 27 Jan 2025 11:20:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.196.110
	  Hostname:    ha-011400-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 5a5a62e2138b409497c96ddb03eff3c7
	  System UUID:                8cc9855d-0e69-ae4d-8590-9ad632ae48d3
	  Boot ID:                    6a0c2e9e-54a4-4459-8824-fa7071960394
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-fzbr5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-011400-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-mg445                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-011400-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-011400-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-4pjv8                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-011400-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-011400-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node ha-011400-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node ha-011400-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node ha-011400-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node ha-011400-m03 event: Registered Node ha-011400-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-011400-m03 event: Registered Node ha-011400-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-011400-m03 event: Registered Node ha-011400-m03 in Controller
	
	
	Name:               ha-011400-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-011400-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	                    minikube.k8s.io/name=ha-011400
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_01_27T11_25_01_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 11:25:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-011400-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 11:39:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 11:35:22 +0000   Mon, 27 Jan 2025 11:25:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 11:35:22 +0000   Mon, 27 Jan 2025 11:25:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 11:35:22 +0000   Mon, 27 Jan 2025 11:25:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 11:35:22 +0000   Mon, 27 Jan 2025 11:25:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.200.81
	  Hostname:    ha-011400-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 fbed3f53580c48d08759a0824a4f7477
	  System UUID:                bc9d568b-f1ec-7345-bb66-88e01c98a98e
	  Boot ID:                    73c4f0eb-7ea5-4bc4-bd47-9b52cfec80f3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-dwx64       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-proxy-7vfrx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x2 over 14m)  kubelet          Node ha-011400-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x2 over 14m)  kubelet          Node ha-011400-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x2 over 14m)  kubelet          Node ha-011400-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node ha-011400-m04 event: Registered Node ha-011400-m04 in Controller
	  Normal  RegisteredNode           14m                node-controller  Node ha-011400-m04 event: Registered Node ha-011400-m04 in Controller
	  Normal  RegisteredNode           14m                node-controller  Node ha-011400-m04 event: Registered Node ha-011400-m04 in Controller
	  Normal  NodeReady                13m                kubelet          Node ha-011400-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +1.823300] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +6.645538] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan27 11:11] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.182918] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[ +30.184632] systemd-fstab-generator[1006]: Ignoring "noauto" option for root device
	[  +0.107311] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.520304] systemd-fstab-generator[1045]: Ignoring "noauto" option for root device
	[  +0.190737] systemd-fstab-generator[1057]: Ignoring "noauto" option for root device
	[  +0.239308] systemd-fstab-generator[1071]: Ignoring "noauto" option for root device
	[  +2.882619] systemd-fstab-generator[1295]: Ignoring "noauto" option for root device
	[  +0.198053] systemd-fstab-generator[1307]: Ignoring "noauto" option for root device
	[  +0.208312] systemd-fstab-generator[1319]: Ignoring "noauto" option for root device
	[  +0.248120] systemd-fstab-generator[1334]: Ignoring "noauto" option for root device
	[ +11.085414] systemd-fstab-generator[1434]: Ignoring "noauto" option for root device
	[  +0.106636] kauditd_printk_skb: 206 callbacks suppressed
	[  +3.750577] systemd-fstab-generator[1702]: Ignoring "noauto" option for root device
	[  +6.277074] systemd-fstab-generator[1849]: Ignoring "noauto" option for root device
	[  +0.102836] kauditd_printk_skb: 74 callbacks suppressed
	[Jan27 11:12] kauditd_printk_skb: 67 callbacks suppressed
	[  +2.351146] systemd-fstab-generator[2367]: Ignoring "noauto" option for root device
	[  +4.969124] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.869016] kauditd_printk_skb: 29 callbacks suppressed
	[Jan27 11:15] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [3ad7004cc4fe] <==
	{"level":"warn","ts":"2025-01-27T11:39:27.437342Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dc6b987122e5b030","from":"dc6b987122e5b030","remote-peer-id":"81ae44a27a2ca5d4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-01-27T11:39:27.446232Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dc6b987122e5b030","from":"dc6b987122e5b030","remote-peer-id":"81ae44a27a2ca5d4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-01-27T11:39:27.454820Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dc6b987122e5b030","from":"dc6b987122e5b030","remote-peer-id":"81ae44a27a2ca5d4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-01-27T11:39:27.460600Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dc6b987122e5b030","from":"dc6b987122e5b030","remote-peer-id":"81ae44a27a2ca5d4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-01-27T11:39:27.461702Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"81ae44a27a2ca5d4","rtt":"1.822906ms","error":"dial tcp 172.29.195.173:2380: connect: no route to host"}
	{"level":"warn","ts":"2025-01-27T11:39:27.461899Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"81ae44a27a2ca5d4","rtt":"11.037431ms","error":"dial tcp 172.29.195.173:2380: connect: no route to host"}
	{"level":"warn","ts":"2025-01-27T11:39:27.465921Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dc6b987122e5b030","from":"dc6b987122e5b030","remote-peer-id":"81ae44a27a2ca5d4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-01-27T11:39:27.477233Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dc6b987122e5b030","from":"dc6b987122e5b030","remote-peer-id":"81ae44a27a2ca5d4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-01-27T11:39:27.485295Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dc6b987122e5b030","from":"dc6b987122e5b030","remote-peer-id":"81ae44a27a2ca5d4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-01-27T11:39:27.493624Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dc6b987122e5b030","from":"dc6b987122e5b030","remote-peer-id":"81ae44a27a2ca5d4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-01-27T11:39:27.501235Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dc6b987122e5b030","from":"dc6b987122e5b030","remote-peer-id":"81ae44a27a2ca5d4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-01-27T11:39:27.508738Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dc6b987122e5b030","from":"dc6b987122e5b030","remote-peer-id":"81ae44a27a2ca5d4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-01-27T11:39:27.514703Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dc6b987122e5b030","from":"dc6b987122e5b030","remote-peer-id":"81ae44a27a2ca5d4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-01-27T11:39:27.517190Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dc6b987122e5b030","from":"dc6b987122e5b030","remote-peer-id":"81ae44a27a2ca5d4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-01-27T11:39:27.519957Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dc6b987122e5b030","from":"dc6b987122e5b030","remote-peer-id":"81ae44a27a2ca5d4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-01-27T11:39:27.523667Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dc6b987122e5b030","from":"dc6b987122e5b030","remote-peer-id":"81ae44a27a2ca5d4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-01-27T11:39:27.534269Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dc6b987122e5b030","from":"dc6b987122e5b030","remote-peer-id":"81ae44a27a2ca5d4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-01-27T11:39:27.541822Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dc6b987122e5b030","from":"dc6b987122e5b030","remote-peer-id":"81ae44a27a2ca5d4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-01-27T11:39:27.546729Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dc6b987122e5b030","from":"dc6b987122e5b030","remote-peer-id":"81ae44a27a2ca5d4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-01-27T11:39:27.550440Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dc6b987122e5b030","from":"dc6b987122e5b030","remote-peer-id":"81ae44a27a2ca5d4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-01-27T11:39:27.562789Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dc6b987122e5b030","from":"dc6b987122e5b030","remote-peer-id":"81ae44a27a2ca5d4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-01-27T11:39:27.571619Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dc6b987122e5b030","from":"dc6b987122e5b030","remote-peer-id":"81ae44a27a2ca5d4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-01-27T11:39:27.618763Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dc6b987122e5b030","from":"dc6b987122e5b030","remote-peer-id":"81ae44a27a2ca5d4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-01-27T11:39:27.640414Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dc6b987122e5b030","from":"dc6b987122e5b030","remote-peer-id":"81ae44a27a2ca5d4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-01-27T11:39:27.652322Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"dc6b987122e5b030","from":"dc6b987122e5b030","remote-peer-id":"81ae44a27a2ca5d4","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 11:39:27 up 29 min,  0 users,  load average: 0.37, 0.51, 0.44
	Linux ha-011400 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2069e52c51e4] <==
	I0127 11:38:50.823910       1 main.go:324] Node ha-011400-m02 has CIDR [10.244.1.0/24] 
	I0127 11:39:00.823652       1 main.go:297] Handling node with IPs: map[172.29.195.173:{}]
	I0127 11:39:00.823689       1 main.go:324] Node ha-011400-m02 has CIDR [10.244.1.0/24] 
	I0127 11:39:00.824167       1 main.go:297] Handling node with IPs: map[172.29.196.110:{}]
	I0127 11:39:00.824247       1 main.go:324] Node ha-011400-m03 has CIDR [10.244.2.0/24] 
	I0127 11:39:00.824364       1 main.go:297] Handling node with IPs: map[172.29.200.81:{}]
	I0127 11:39:00.824395       1 main.go:324] Node ha-011400-m04 has CIDR [10.244.3.0/24] 
	I0127 11:39:00.824614       1 main.go:297] Handling node with IPs: map[172.29.192.249:{}]
	I0127 11:39:00.824727       1 main.go:301] handling current node
	I0127 11:39:10.832136       1 main.go:297] Handling node with IPs: map[172.29.195.173:{}]
	I0127 11:39:10.832189       1 main.go:324] Node ha-011400-m02 has CIDR [10.244.1.0/24] 
	I0127 11:39:10.833166       1 main.go:297] Handling node with IPs: map[172.29.196.110:{}]
	I0127 11:39:10.833336       1 main.go:324] Node ha-011400-m03 has CIDR [10.244.2.0/24] 
	I0127 11:39:10.833728       1 main.go:297] Handling node with IPs: map[172.29.200.81:{}]
	I0127 11:39:10.833841       1 main.go:324] Node ha-011400-m04 has CIDR [10.244.3.0/24] 
	I0127 11:39:10.834319       1 main.go:297] Handling node with IPs: map[172.29.192.249:{}]
	I0127 11:39:10.834513       1 main.go:301] handling current node
	I0127 11:39:20.822887       1 main.go:297] Handling node with IPs: map[172.29.195.173:{}]
	I0127 11:39:20.822984       1 main.go:324] Node ha-011400-m02 has CIDR [10.244.1.0/24] 
	I0127 11:39:20.823558       1 main.go:297] Handling node with IPs: map[172.29.196.110:{}]
	I0127 11:39:20.823589       1 main.go:324] Node ha-011400-m03 has CIDR [10.244.2.0/24] 
	I0127 11:39:20.826880       1 main.go:297] Handling node with IPs: map[172.29.200.81:{}]
	I0127 11:39:20.826915       1 main.go:324] Node ha-011400-m04 has CIDR [10.244.3.0/24] 
	I0127 11:39:20.827147       1 main.go:297] Handling node with IPs: map[172.29.192.249:{}]
	I0127 11:39:20.827182       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9bbef2b1e01c] <==
	I0127 11:12:08.087963       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0127 11:12:08.110529       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0127 11:12:11.736262       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0127 11:12:11.814228       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0127 11:19:32.469733       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0127 11:19:32.469782       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 173.901µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0127 11:19:32.471630       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0127 11:19:32.473520       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0127 11:19:32.475652       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="43.958334ms" method="PATCH" path="/api/v1/namespaces/default/events/ha-011400-m03.181e88aa5f0c48a1" result=null
	E0127 11:20:52.520955       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50918: use of closed network connection
	E0127 11:20:53.037825       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50920: use of closed network connection
	E0127 11:20:53.584570       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50922: use of closed network connection
	E0127 11:20:54.162206       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50924: use of closed network connection
	E0127 11:20:54.784350       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50926: use of closed network connection
	E0127 11:20:55.323931       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50928: use of closed network connection
	E0127 11:20:55.829844       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50930: use of closed network connection
	E0127 11:20:56.383232       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50932: use of closed network connection
	E0127 11:20:56.904034       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50934: use of closed network connection
	E0127 11:20:57.853762       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50937: use of closed network connection
	E0127 11:21:08.369828       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50939: use of closed network connection
	E0127 11:21:08.887545       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50942: use of closed network connection
	E0127 11:21:19.384206       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50944: use of closed network connection
	E0127 11:21:19.894124       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50949: use of closed network connection
	E0127 11:21:30.403920       1 conn.go:339] Error on socket receive: read tcp 172.29.207.254:8443->172.29.192.1:50951: use of closed network connection
	W0127 11:38:06.042845       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.29.192.249 172.29.196.110]
	
	
	==> kube-controller-manager [198b69006a51] <==
	I0127 11:25:03.056379       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m04"
	I0127 11:25:03.106586       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m04"
	I0127 11:25:10.635120       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m04"
	I0127 11:25:29.091879       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m04"
	I0127 11:25:29.099567       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-011400-m04"
	I0127 11:25:29.123843       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m04"
	I0127 11:25:30.982683       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m04"
	I0127 11:25:31.362689       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m04"
	I0127 11:25:32.549046       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400"
	I0127 11:27:01.165936       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m03"
	I0127 11:27:55.787949       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m02"
	I0127 11:30:17.297905       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m04"
	I0127 11:30:39.404668       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400"
	I0127 11:32:08.501240       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m03"
	I0127 11:33:00.556782       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m02"
	I0127 11:35:22.177249       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m04"
	I0127 11:35:46.654987       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400"
	I0127 11:37:14.746096       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m03"
	I0127 11:38:41.590222       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m02"
	I0127 11:38:41.590366       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-011400-m04"
	I0127 11:38:41.640321       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m02"
	I0127 11:38:41.836851       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="34.34104ms"
	I0127 11:38:41.837311       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="272.902µs"
	I0127 11:38:43.285406       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m02"
	I0127 11:38:46.899125       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-011400-m02"
	
	
	==> kube-proxy [b57131c4a903] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 11:12:13.106799       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 11:12:13.120320       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.192.249"]
	E0127 11:12:13.120586       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 11:12:13.199607       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 11:12:13.199770       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 11:12:13.199818       1 server_linux.go:170] "Using iptables Proxier"
	I0127 11:12:13.205965       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 11:12:13.207246       1 server.go:497] "Version info" version="v1.32.1"
	I0127 11:12:13.207367       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 11:12:13.211546       1 config.go:199] "Starting service config controller"
	I0127 11:12:13.211584       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 11:12:13.211618       1 config.go:105] "Starting endpoint slice config controller"
	I0127 11:12:13.211624       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 11:12:13.212361       1 config.go:329] "Starting node config controller"
	I0127 11:12:13.212392       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 11:12:13.311857       1 shared_informer.go:320] Caches are synced for service config
	I0127 11:12:13.311982       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 11:12:13.313159       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [dcdc67228908] <==
	I0127 11:12:06.920399       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0127 11:19:31.837856       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4pjv8\": pod kube-proxy-4pjv8 is already assigned to node \"ha-011400-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4pjv8" node="ha-011400-m03"
	E0127 11:19:31.844329       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod c0b28c82-50ac-4021-949d-75883580a018(kube-system/kube-proxy-4pjv8) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-4pjv8"
	E0127 11:19:31.844428       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4pjv8\": pod kube-proxy-4pjv8 is already assigned to node \"ha-011400-m03\"" pod="kube-system/kube-proxy-4pjv8"
	E0127 11:19:31.837948       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mg445\": pod kindnet-mg445 is already assigned to node \"ha-011400-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-mg445" node="ha-011400-m03"
	E0127 11:19:31.844744       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod 37787d9b-44c4-4e83-8d2c-e67333301fd1(kube-system/kindnet-mg445) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-mg445"
	E0127 11:19:31.844924       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mg445\": pod kindnet-mg445 is already assigned to node \"ha-011400-m03\"" pod="kube-system/kindnet-mg445"
	I0127 11:19:31.845083       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mg445" node="ha-011400-m03"
	I0127 11:19:31.844630       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4pjv8" node="ha-011400-m03"
	E0127 11:20:44.421575       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-58667487b6-fzbr5\": pod busybox-58667487b6-fzbr5 is already assigned to node \"ha-011400-m03\"" plugin="DefaultBinder" pod="default/busybox-58667487b6-fzbr5" node="ha-011400-m03"
	E0127 11:20:44.422211       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod f0d62b04-6b3f-4c90-8b5a-d5dc2e7b527c(default/busybox-58667487b6-fzbr5) wasn't assumed so cannot be forgotten" pod="default/busybox-58667487b6-fzbr5"
	E0127 11:20:44.422616       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-58667487b6-fzbr5\": pod busybox-58667487b6-fzbr5 is already assigned to node \"ha-011400-m03\"" pod="default/busybox-58667487b6-fzbr5"
	I0127 11:20:44.422706       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-58667487b6-fzbr5" node="ha-011400-m03"
	E0127 11:25:00.636365       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-dwx64\": pod kindnet-dwx64 is already assigned to node \"ha-011400-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-dwx64" node="ha-011400-m04"
	E0127 11:25:00.637757       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod 86b53ae9-961f-4312-a2bb-84bf4bb264b0(kube-system/kindnet-dwx64) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-dwx64"
	E0127 11:25:00.641047       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-dwx64\": pod kindnet-dwx64 is already assigned to node \"ha-011400-m04\"" pod="kube-system/kindnet-dwx64"
	I0127 11:25:00.641916       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-dwx64" node="ha-011400-m04"
	E0127 11:25:00.683640       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-brkqc\": pod kube-proxy-brkqc is already assigned to node \"ha-011400-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-brkqc" node="ha-011400-m04"
	E0127 11:25:00.685568       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vpfhx\": pod kindnet-vpfhx is already assigned to node \"ha-011400-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-vpfhx" node="ha-011400-m04"
	E0127 11:25:00.686337       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod c70c4790-ba29-4df5-955d-4f156e084511(kube-system/kindnet-vpfhx) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-vpfhx"
	E0127 11:25:00.686525       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vpfhx\": pod kindnet-vpfhx is already assigned to node \"ha-011400-m04\"" pod="kube-system/kindnet-vpfhx"
	I0127 11:25:00.686827       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-vpfhx" node="ha-011400-m04"
	E0127 11:25:00.684021       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod 3a52b2c0-7d88-462f-bff7-cd64a75635c8(kube-system/kube-proxy-brkqc) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-brkqc"
	E0127 11:25:00.687914       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-brkqc\": pod kube-proxy-brkqc is already assigned to node \"ha-011400-m04\"" pod="kube-system/kube-proxy-brkqc"
	I0127 11:25:00.688499       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-brkqc" node="ha-011400-m04"
	
	
	==> kubelet <==
	Jan 27 11:35:08 ha-011400 kubelet[2374]: E0127 11:35:08.276653    2374 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 11:35:08 ha-011400 kubelet[2374]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 11:35:08 ha-011400 kubelet[2374]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 11:35:08 ha-011400 kubelet[2374]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 11:35:08 ha-011400 kubelet[2374]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 11:36:08 ha-011400 kubelet[2374]: E0127 11:36:08.275154    2374 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 11:36:08 ha-011400 kubelet[2374]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 11:36:08 ha-011400 kubelet[2374]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 11:36:08 ha-011400 kubelet[2374]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 11:36:08 ha-011400 kubelet[2374]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 11:37:08 ha-011400 kubelet[2374]: E0127 11:37:08.276649    2374 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 11:37:08 ha-011400 kubelet[2374]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 11:37:08 ha-011400 kubelet[2374]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 11:37:08 ha-011400 kubelet[2374]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 11:37:08 ha-011400 kubelet[2374]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 11:38:08 ha-011400 kubelet[2374]: E0127 11:38:08.277127    2374 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 11:38:08 ha-011400 kubelet[2374]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 11:38:08 ha-011400 kubelet[2374]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 11:38:08 ha-011400 kubelet[2374]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 11:38:08 ha-011400 kubelet[2374]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 11:39:08 ha-011400 kubelet[2374]: E0127 11:39:08.285588    2374 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 11:39:08 ha-011400 kubelet[2374]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 11:39:08 ha-011400 kubelet[2374]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 11:39:08 ha-011400 kubelet[2374]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 11:39:08 ha-011400 kubelet[2374]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-011400 -n ha-011400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-011400 -n ha-011400: (12.0033202s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-011400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (50.35s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (56.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-659000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-659000 -- exec busybox-58667487b6-2jq9j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-659000 -- exec busybox-58667487b6-2jq9j -- sh -c "ping -c 1 172.29.192.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-659000 -- exec busybox-58667487b6-2jq9j -- sh -c "ping -c 1 172.29.192.1": exit status 1 (10.521164s)

                                                
                                                
-- stdout --
	PING 172.29.192.1 (172.29.192.1): 56 data bytes
	
	--- 172.29.192.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.29.192.1) from pod (busybox-58667487b6-2jq9j): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-659000 -- exec busybox-58667487b6-ktfxc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-659000 -- exec busybox-58667487b6-ktfxc -- sh -c "ping -c 1 172.29.192.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-659000 -- exec busybox-58667487b6-ktfxc -- sh -c "ping -c 1 172.29.192.1": exit status 1 (10.4575653s)

                                                
                                                
-- stdout --
	PING 172.29.192.1 (172.29.192.1): 56 data bytes
	
	--- 172.29.192.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.29.192.1) from pod (busybox-58667487b6-ktfxc): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-659000 -n multinode-659000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-659000 -n multinode-659000: (11.7343195s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 logs -n 25: (8.401061s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-129500 ssh -- ls                    | mount-start-2-129500 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:05 UTC | 27 Jan 25 12:05 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-129500                           | mount-start-1-129500 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:05 UTC | 27 Jan 25 12:05 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-129500 ssh -- ls                    | mount-start-2-129500 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:05 UTC | 27 Jan 25 12:05 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-129500                           | mount-start-2-129500 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:05 UTC | 27 Jan 25 12:06 UTC |
	| start   | -p mount-start-2-129500                           | mount-start-2-129500 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:06 UTC | 27 Jan 25 12:08 UTC |
	| mount   | C:\Users\jenkins.minikube6:/minikube-host         | mount-start-2-129500 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:08 UTC |                     |
	|         | --profile mount-start-2-129500 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-129500 ssh -- ls                    | mount-start-2-129500 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:08 UTC | 27 Jan 25 12:08 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-129500                           | mount-start-2-129500 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:08 UTC | 27 Jan 25 12:08 UTC |
	| delete  | -p mount-start-1-129500                           | mount-start-1-129500 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:08 UTC | 27 Jan 25 12:09 UTC |
	| start   | -p multinode-659000                               | multinode-659000     | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:09 UTC | 27 Jan 25 12:15 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-659000 -- apply -f                   | multinode-659000     | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:16 UTC | 27 Jan 25 12:16 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-659000 -- rollout                    | multinode-659000     | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:16 UTC | 27 Jan 25 12:16 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-659000 -- get pods -o                | multinode-659000     | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:16 UTC | 27 Jan 25 12:16 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-659000 -- get pods -o                | multinode-659000     | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:16 UTC | 27 Jan 25 12:16 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-659000 -- exec                       | multinode-659000     | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:16 UTC | 27 Jan 25 12:16 UTC |
	|         | busybox-58667487b6-2jq9j --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-659000 -- exec                       | multinode-659000     | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:16 UTC | 27 Jan 25 12:16 UTC |
	|         | busybox-58667487b6-ktfxc --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-659000 -- exec                       | multinode-659000     | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:16 UTC | 27 Jan 25 12:16 UTC |
	|         | busybox-58667487b6-2jq9j --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-659000 -- exec                       | multinode-659000     | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:16 UTC | 27 Jan 25 12:16 UTC |
	|         | busybox-58667487b6-ktfxc --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-659000 -- exec                       | multinode-659000     | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:16 UTC | 27 Jan 25 12:16 UTC |
	|         | busybox-58667487b6-2jq9j -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-659000 -- exec                       | multinode-659000     | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:16 UTC | 27 Jan 25 12:16 UTC |
	|         | busybox-58667487b6-ktfxc -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-659000 -- get pods -o                | multinode-659000     | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:16 UTC | 27 Jan 25 12:16 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-659000 -- exec                       | multinode-659000     | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:16 UTC | 27 Jan 25 12:16 UTC |
	|         | busybox-58667487b6-2jq9j                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-659000 -- exec                       | multinode-659000     | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:16 UTC |                     |
	|         | busybox-58667487b6-2jq9j -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.29.192.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-659000 -- exec                       | multinode-659000     | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:16 UTC | 27 Jan 25 12:16 UTC |
	|         | busybox-58667487b6-ktfxc                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-659000 -- exec                       | multinode-659000     | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:16 UTC |                     |
	|         | busybox-58667487b6-ktfxc -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.29.192.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
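
The last few entries in the table above check pod-to-host connectivity: `nslookup host.minikube.internal` runs inside each busybox pod, `awk 'NR==5' | cut -d' ' -f3` extracts the resolved address, and a single ping of the host gateway (172.29.192.1) follows; the two ping entries have no completion time, which is the failure under investigation. Below is a minimal Go sketch of the same check, assuming kubectl is on PATH and that the profile name is usable as a kubeconfig context (the test itself goes through the minikube kubectl wrapper instead); the file and helper names are hypothetical.

    // dns_check_sketch.go - illustrative only; helper and file names are hypothetical.
    // Reproduces the last few table entries: run `nslookup host.minikube.internal`
    // inside a busybox pod, extract the resolved address, then ping it once.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    // hostIPFromPod shells out like the logged commands do, piping nslookup
    // output through awk 'NR==5' | cut -d' ' -f3 inside the pod.
    func hostIPFromPod(context, pod string) (string, error) {
    	script := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
    	out, err := exec.Command("kubectl", "--context", context, "exec", pod,
    		"--", "sh", "-c", script).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	// Profile and pod name copied from the table above.
    	ip, err := hostIPFromPod("multinode-659000", "busybox-58667487b6-2jq9j")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("host.minikube.internal resolves to", ip)
    	// Single ping of the resolved host address, mirroring `ping -c 1 172.29.192.1`.
    	ping := exec.Command("kubectl", "--context", "multinode-659000", "exec",
    		"busybox-58667487b6-2jq9j", "--", "sh", "-c", "ping -c 1 "+ip)
    	if err := ping.Run(); err != nil {
    		log.Printf("ping failed (matches the missing end time in the table): %v", err)
    	}
    }
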
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:09:01
	Running on machine: minikube6
	Binary: Built with gc go1.23.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:09:01.249965    8732 out.go:345] Setting OutFile to fd 1476 ...
	I0127 12:09:01.322296    8732 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:09:01.322296    8732 out.go:358] Setting ErrFile to fd 1032...
	I0127 12:09:01.322296    8732 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:09:01.344674    8732 out.go:352] Setting JSON to false
	I0127 12:09:01.347782    8732 start.go:129] hostinfo: {"hostname":"minikube6","uptime":442724,"bootTime":1737537016,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5371 Build 19045.5371","kernelVersion":"10.0.19045.5371 Build 19045.5371","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0127 12:09:01.347923    8732 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0127 12:09:01.351638    8732 out.go:177] * [multinode-659000] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	I0127 12:09:01.357004    8732 notify.go:220] Checking for updates...
	I0127 12:09:01.357764    8732 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 12:09:01.360062    8732 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:09:01.365562    8732 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0127 12:09:01.369759    8732 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:09:01.372809    8732 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:09:01.376414    8732 config.go:182] Loaded profile config "ha-011400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:09:01.376414    8732 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:09:06.390739    8732 out.go:177] * Using the hyperv driver based on user configuration
	I0127 12:09:06.394574    8732 start.go:297] selected driver: hyperv
	I0127 12:09:06.394574    8732 start.go:901] validating driver "hyperv" against <nil>
	I0127 12:09:06.394574    8732 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:09:06.440220    8732 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 12:09:06.441614    8732 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:09:06.441847    8732 cni.go:84] Creating CNI manager for ""
	I0127 12:09:06.441904    8732 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0127 12:09:06.441904    8732 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0127 12:09:06.441904    8732 start.go:340] cluster config:
	{Name:multinode-659000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-659000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:09:06.441904    8732 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:09:06.447840    8732 out.go:177] * Starting "multinode-659000" primary control-plane node in "multinode-659000" cluster
	I0127 12:09:06.451830    8732 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0127 12:09:06.452853    8732 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0127 12:09:06.452853    8732 cache.go:56] Caching tarball of preloaded images
	I0127 12:09:06.452853    8732 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 12:09:06.452853    8732 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0127 12:09:06.452853    8732 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\config.json ...
	I0127 12:09:06.453865    8732 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\config.json: {Name:mk191495685c4940ee4fe7f429d0df7f93cf9b61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:09:06.454148    8732 start.go:360] acquireMachinesLock for multinode-659000: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:09:06.455201    8732 start.go:364] duration metric: took 1.0529ms to acquireMachinesLock for "multinode-659000"
	I0127 12:09:06.455366    8732 start.go:93] Provisioning new machine with config: &{Name:multinode-659000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-659000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 12:09:06.455366    8732 start.go:125] createHost starting for "" (driver="hyperv")
	I0127 12:09:06.458548    8732 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0127 12:09:06.458548    8732 start.go:159] libmachine.API.Create for "multinode-659000" (driver="hyperv")
	I0127 12:09:06.458548    8732 client.go:168] LocalClient.Create starting
	I0127 12:09:06.459552    8732 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0127 12:09:06.459552    8732 main.go:141] libmachine: Decoding PEM data...
	I0127 12:09:06.459552    8732 main.go:141] libmachine: Parsing certificate...
	I0127 12:09:06.459552    8732 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0127 12:09:06.459552    8732 main.go:141] libmachine: Decoding PEM data...
	I0127 12:09:06.459552    8732 main.go:141] libmachine: Parsing certificate...
	I0127 12:09:06.459552    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0127 12:09:08.445362    8732 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0127 12:09:08.445858    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:09:08.445993    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0127 12:09:10.099728    8732 main.go:141] libmachine: [stdout =====>] : False
	
	I0127 12:09:10.099728    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:09:10.099728    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0127 12:09:11.495630    8732 main.go:141] libmachine: [stdout =====>] : True
	
	I0127 12:09:11.496297    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:09:11.496441    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0127 12:09:14.978483    8732 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0127 12:09:14.978597    8732 main.go:141] libmachine: [stderr =====>] : 
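
The driver picks a virtual switch by asking Hyper-V for every external switch plus the well-known "Default Switch" GUID and reading back the JSON shown above. A rough Go sketch of decoding that output follows; the type and field names are invented for illustration and only cover the three properties selected by the PowerShell query.

    // switch_parse_sketch.go - illustrative decoding of the Get-VMSwitch JSON above;
    // the struct and function names are hypothetical, not minikube's own types.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    )

    // vmSwitch mirrors the three properties selected by the PowerShell query.
    type vmSwitch struct {
    	Id         string `json:"Id"`
    	Name       string `json:"Name"`
    	SwitchType int    `json:"SwitchType"` // Hyper-V enum: 0 private, 1 internal, 2 external
    }

    func main() {
    	// Verbatim stdout from the log above.
    	raw := `[
    	    {
    	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
    	        "Name":  "Default Switch",
    	        "SwitchType":  1
    	    }
    	]`
    	var switches []vmSwitch
    	if err := json.Unmarshal([]byte(raw), &switches); err != nil {
    		log.Fatal(err)
    	}
    	for _, s := range switches {
    		fmt.Printf("candidate switch %q (%s), type %d\n", s.Name, s.Id, s.SwitchType)
    	}
    }
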
	I0127 12:09:14.981405    8732 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 12:09:15.536884    8732 main.go:141] libmachine: Creating SSH key...
	I0127 12:09:15.891352    8732 main.go:141] libmachine: Creating VM...
	I0127 12:09:15.891352    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0127 12:09:18.757303    8732 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0127 12:09:18.757410    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:09:18.757505    8732 main.go:141] libmachine: Using switch "Default Switch"
	I0127 12:09:18.757585    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0127 12:09:20.429326    8732 main.go:141] libmachine: [stdout =====>] : True
	
	I0127 12:09:20.429326    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:09:20.429326    8732 main.go:141] libmachine: Creating VHD
	I0127 12:09:20.430164    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0127 12:09:24.012952    8732 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 6012AE7F-A1D4-45C8-B4A0-B07DD13458F8
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0127 12:09:24.012952    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:09:24.012952    8732 main.go:141] libmachine: Writing magic tar header
	I0127 12:09:24.012952    8732 main.go:141] libmachine: Writing SSH key tar header
	I0127 12:09:24.023764    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0127 12:09:27.075783    8732 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:09:27.075783    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:09:27.076260    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000\disk.vhd' -SizeBytes 20000MB
	I0127 12:09:29.500350    8732 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:09:29.500350    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:09:29.501397    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-659000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0127 12:09:33.003527    8732 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-659000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0127 12:09:33.004223    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:09:33.004223    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-659000 -DynamicMemoryEnabled $false
	I0127 12:09:35.159391    8732 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:09:35.159391    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:09:35.159391    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-659000 -Count 2
	I0127 12:09:37.191865    8732 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:09:37.192874    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:09:37.192874    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-659000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000\boot2docker.iso'
	I0127 12:09:39.622724    8732 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:09:39.622724    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:09:39.623621    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-659000 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000\disk.vhd'
	I0127 12:09:42.138065    8732 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:09:42.139103    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:09:42.139103    8732 main.go:141] libmachine: Starting VM...
	I0127 12:09:42.139131    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-659000
	I0127 12:09:45.157479    8732 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:09:45.157886    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:09:45.157886    8732 main.go:141] libmachine: Waiting for host to start...
	I0127 12:09:45.157944    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:09:47.321799    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:09:47.321958    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:09:47.321958    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:09:49.786250    8732 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:09:49.786250    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:09:50.787204    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:09:52.972335    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:09:52.972335    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:09:52.972492    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:09:55.469944    8732 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:09:55.469944    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:09:56.471228    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:09:58.562348    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:09:58.563053    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:09:58.563162    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:10:01.050880    8732 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:10:01.051757    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:10:02.052324    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:10:04.202307    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:10:04.202307    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:10:04.202307    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:10:06.661329    8732 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:10:06.661329    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:10:07.662799    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:10:09.805632    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:10:09.805681    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:10:09.805759    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:10:12.251887    8732 main.go:141] libmachine: [stdout =====>] : 172.29.204.17
	
	I0127 12:10:12.251887    8732 main.go:141] libmachine: [stderr =====>] : 
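
After Start-VM, the driver alternates between querying the VM state and the first IP address of its first network adapter roughly once per second; in this run the address (172.29.204.17) appears after about 27 seconds. A minimal sketch of such a wait loop is shown below, assuming PowerShell is on PATH; the helper name and the three-minute deadline are illustrative, not minikube's actual values.

    // wait_for_ip_sketch.go - illustrative only. getVMIP stands in for the
    // PowerShell query logged above; its name and signature are hypothetical.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    	"time"
    )

    // getVMIP asks Hyper-V for the first IP address of the VM's first adapter,
    // the same query the log repeats while "Waiting for host to start...".
    func getVMIP(name string) (string, error) {
    	ps := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", name)
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	deadline := time.Now().Add(3 * time.Minute)
    	for time.Now().Before(deadline) {
    		ip, err := getVMIP("multinode-659000")
    		if err == nil && ip != "" {
    			fmt.Println("VM is reachable at", ip) // e.g. 172.29.204.17 in this run
    			return
    		}
    		time.Sleep(time.Second) // the log shows roughly 1s between empty results
    	}
    	log.Fatal("timed out waiting for an IP address")
    }
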
	I0127 12:10:12.252753    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:10:14.318940    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:10:14.319206    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:10:14.319206    8732 machine.go:93] provisionDockerMachine start ...
	I0127 12:10:14.319335    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:10:16.370139    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:10:16.370139    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:10:16.370139    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:10:18.744593    8732 main.go:141] libmachine: [stdout =====>] : 172.29.204.17
	
	I0127 12:10:18.744593    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:10:18.751291    8732 main.go:141] libmachine: Using SSH client type: native
	I0127 12:10:18.764262    8732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.204.17 22 <nil> <nil>}
	I0127 12:10:18.765054    8732 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 12:10:18.903167    8732 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 12:10:18.903258    8732 buildroot.go:166] provisioning hostname "multinode-659000"
	I0127 12:10:18.903258    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:10:20.907116    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:10:20.907295    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:10:20.907295    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:10:23.268457    8732 main.go:141] libmachine: [stdout =====>] : 172.29.204.17
	
	I0127 12:10:23.268719    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:10:23.272752    8732 main.go:141] libmachine: Using SSH client type: native
	I0127 12:10:23.273594    8732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.204.17 22 <nil> <nil>}
	I0127 12:10:23.273594    8732 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-659000 && echo "multinode-659000" | sudo tee /etc/hostname
	I0127 12:10:23.418635    8732 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-659000
	
	I0127 12:10:23.418635    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:10:25.432208    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:10:25.432536    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:10:25.432536    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:10:27.852889    8732 main.go:141] libmachine: [stdout =====>] : 172.29.204.17
	
	I0127 12:10:27.852889    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:10:27.858564    8732 main.go:141] libmachine: Using SSH client type: native
	I0127 12:10:27.858764    8732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.204.17 22 <nil> <nil>}
	I0127 12:10:27.858764    8732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-659000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-659000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-659000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:10:28.002398    8732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:10:28.002487    8732 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0127 12:10:28.002556    8732 buildroot.go:174] setting up certificates
	I0127 12:10:28.002598    8732 provision.go:84] configureAuth start
	I0127 12:10:28.002686    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:10:30.136872    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:10:30.136872    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:10:30.137974    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:10:32.575429    8732 main.go:141] libmachine: [stdout =====>] : 172.29.204.17
	
	I0127 12:10:32.575429    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:10:32.575934    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:10:34.624666    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:10:34.624666    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:10:34.625297    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:10:37.026101    8732 main.go:141] libmachine: [stdout =====>] : 172.29.204.17
	
	I0127 12:10:37.026384    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:10:37.026439    8732 provision.go:143] copyHostCerts
	I0127 12:10:37.026439    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0127 12:10:37.026439    8732 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0127 12:10:37.026439    8732 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0127 12:10:37.027342    8732 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0127 12:10:37.028202    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0127 12:10:37.028871    8732 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0127 12:10:37.028943    8732 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0127 12:10:37.028943    8732 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0127 12:10:37.030268    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0127 12:10:37.030527    8732 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0127 12:10:37.030527    8732 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0127 12:10:37.031058    8732 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0127 12:10:37.032016    8732 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-659000 san=[127.0.0.1 172.29.204.17 localhost minikube multinode-659000]
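
configureAuth generates a Docker server certificate signed by the minikube CA, with subject alternative names covering 127.0.0.1, the VM address, localhost, minikube, and the profile name. The sketch below produces a certificate with the same SAN list; it is self-signed for brevity (the real provisioner signs with the CA key and material from ca.pem/ca-key.pem), and the file and variable names are illustrative.

    // san_cert_sketch.go - illustrative only: issues a short-lived, self-signed
    // certificate carrying the same SANs listed in the log line above.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-659000"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs copied from the san=[...] list in the log line above.
    		DNSNames:    []string{"localhost", "minikube", "multinode-659000"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.29.204.17")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
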
	I0127 12:10:37.177167    8732 provision.go:177] copyRemoteCerts
	I0127 12:10:37.189242    8732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:10:37.189242    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:10:39.202176    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:10:39.202176    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:10:39.202809    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:10:41.615532    8732 main.go:141] libmachine: [stdout =====>] : 172.29.204.17
	
	I0127 12:10:41.615532    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:10:41.616816    8732 sshutil.go:53] new ssh client: &{IP:172.29.204.17 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000\id_rsa Username:docker}
	I0127 12:10:41.715833    8732 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5265435s)
	I0127 12:10:41.715833    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0127 12:10:41.715833    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 12:10:41.759372    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0127 12:10:41.759780    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0127 12:10:41.806320    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0127 12:10:41.806551    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 12:10:41.854508    8732 provision.go:87] duration metric: took 13.8517667s to configureAuth
	I0127 12:10:41.854508    8732 buildroot.go:189] setting minikube options for container-runtime
	I0127 12:10:41.855530    8732 config.go:182] Loaded profile config "multinode-659000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:10:41.855648    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:10:43.879051    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:10:43.879205    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:10:43.879205    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:10:46.322750    8732 main.go:141] libmachine: [stdout =====>] : 172.29.204.17
	
	I0127 12:10:46.322750    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:10:46.327794    8732 main.go:141] libmachine: Using SSH client type: native
	I0127 12:10:46.328753    8732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.204.17 22 <nil> <nil>}
	I0127 12:10:46.328753    8732 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0127 12:10:46.466422    8732 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0127 12:10:46.466575    8732 buildroot.go:70] root file system type: tmpfs
	I0127 12:10:46.466818    8732 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0127 12:10:46.466975    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:10:48.490604    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:10:48.491099    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:10:48.491099    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:10:50.904769    8732 main.go:141] libmachine: [stdout =====>] : 172.29.204.17
	
	I0127 12:10:50.904769    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:10:50.912279    8732 main.go:141] libmachine: Using SSH client type: native
	I0127 12:10:50.912414    8732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.204.17 22 <nil> <nil>}
	I0127 12:10:50.912414    8732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0127 12:10:51.083897    8732 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0127 12:10:51.084014    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:10:53.079001    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:10:53.079916    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:10:53.080016    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:10:55.457262    8732 main.go:141] libmachine: [stdout =====>] : 172.29.204.17
	
	I0127 12:10:55.457262    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:10:55.462826    8732 main.go:141] libmachine: Using SSH client type: native
	I0127 12:10:55.463549    8732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.204.17 22 <nil> <nil>}
	I0127 12:10:55.463549    8732 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0127 12:10:57.606122    8732 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
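
The rendered docker.service content is first written to docker.service.new; it only replaces the live unit, followed by daemon-reload, enable, and restart, when diff reports a difference. On a fresh VM the target file does not exist yet, so diff fails and the new unit is installed, which is what the symlink message above records. A small sketch of building that one-liner (the helper name is hypothetical):

    // unit_swap_sketch.go - illustrative; buildSwapCmd is a hypothetical helper that
    // reproduces the "diff || { mv; daemon-reload; enable; restart; }" shell pattern above.
    package main

    import "fmt"

    // buildSwapCmd returns a shell command that installs <unit>.new over <unit>
    // only when the two files differ (or the target does not exist yet).
    func buildSwapCmd(unit string) string {
    	return fmt.Sprintf(
    		"sudo diff -u %[1]s %[1]s.new || "+
    			"{ sudo mv %[1]s.new %[1]s; "+
    			"sudo systemctl -f daemon-reload && "+
    			"sudo systemctl -f enable docker && "+
    			"sudo systemctl -f restart docker; }",
    		unit)
    }

    func main() {
    	fmt.Println(buildSwapCmd("/lib/systemd/system/docker.service"))
    }
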
	
	I0127 12:10:57.606122    8732 machine.go:96] duration metric: took 43.2864663s to provisionDockerMachine
	I0127 12:10:57.606122    8732 client.go:171] duration metric: took 1m51.1464182s to LocalClient.Create
	I0127 12:10:57.606122    8732 start.go:167] duration metric: took 1m51.1464182s to libmachine.API.Create "multinode-659000"
	I0127 12:10:57.606122    8732 start.go:293] postStartSetup for "multinode-659000" (driver="hyperv")
	I0127 12:10:57.606655    8732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:10:57.616817    8732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:10:57.616817    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:10:59.668604    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:10:59.668604    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:10:59.669241    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:11:02.205899    8732 main.go:141] libmachine: [stdout =====>] : 172.29.204.17
	
	I0127 12:11:02.206178    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:11:02.206658    8732 sshutil.go:53] new ssh client: &{IP:172.29.204.17 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000\id_rsa Username:docker}
	I0127 12:11:02.308905    8732 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6916926s)
	I0127 12:11:02.319112    8732 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:11:02.326105    8732 command_runner.go:130] > NAME=Buildroot
	I0127 12:11:02.326105    8732 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0127 12:11:02.326105    8732 command_runner.go:130] > ID=buildroot
	I0127 12:11:02.326105    8732 command_runner.go:130] > VERSION_ID=2023.02.9
	I0127 12:11:02.326105    8732 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0127 12:11:02.326218    8732 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 12:11:02.326218    8732 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0127 12:11:02.326660    8732 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0127 12:11:02.327725    8732 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> 59562.pem in /etc/ssl/certs
	I0127 12:11:02.327792    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> /etc/ssl/certs/59562.pem
	I0127 12:11:02.338920    8732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:11:02.355326    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem --> /etc/ssl/certs/59562.pem (1708 bytes)
	I0127 12:11:02.393625    8732 start.go:296] duration metric: took 4.7874535s for postStartSetup
	I0127 12:11:02.396514    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:11:04.429870    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:11:04.429870    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:11:04.429870    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:11:06.822328    8732 main.go:141] libmachine: [stdout =====>] : 172.29.204.17
	
	I0127 12:11:06.823597    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:11:06.823597    8732 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\config.json ...
	I0127 12:11:06.827104    8732 start.go:128] duration metric: took 2m0.3704861s to createHost
	I0127 12:11:06.827250    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:11:08.855969    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:11:08.856530    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:11:08.856530    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:11:11.358866    8732 main.go:141] libmachine: [stdout =====>] : 172.29.204.17
	
	I0127 12:11:11.359046    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:11:11.363631    8732 main.go:141] libmachine: Using SSH client type: native
	I0127 12:11:11.364070    8732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.204.17 22 <nil> <nil>}
	I0127 12:11:11.364157    8732 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 12:11:11.497288    8732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737979871.508656494
	
	I0127 12:11:11.497288    8732 fix.go:216] guest clock: 1737979871.508656494
	I0127 12:11:11.497288    8732 fix.go:229] Guest: 2025-01-27 12:11:11.508656494 +0000 UTC Remote: 2025-01-27 12:11:06.8271779 +0000 UTC m=+125.667129501 (delta=4.681478594s)
	I0127 12:11:11.497288    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:11:13.602816    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:11:13.602816    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:11:13.602816    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:11:16.087672    8732 main.go:141] libmachine: [stdout =====>] : 172.29.204.17
	
	I0127 12:11:16.087672    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:11:16.093293    8732 main.go:141] libmachine: Using SSH client type: native
	I0127 12:11:16.093467    8732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.204.17 22 <nil> <nil>}
	I0127 12:11:16.093467    8732 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1737979871
	I0127 12:11:16.241196    8732 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 27 12:11:11 UTC 2025
	
	I0127 12:11:16.241196    8732 fix.go:236] clock set: Mon Jan 27 12:11:11 UTC 2025
	 (err=<nil>)
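
To correct clock skew, the provisioner reads the guest's `date +%s.%N`, compares it with the host clock (a delta of roughly 4.68s in this run), and resets the guest with `sudo date -s @<epoch>`. A minimal sketch of that comparison, reusing the guest timestamp logged above; the names are illustrative:

    // clock_sync_sketch.go - illustrative; parses the guest's `date +%s.%N` output
    // (as logged above), computes the host/guest delta, and prints the fix command.
    package main

    import (
    	"fmt"
    	"log"
    	"strconv"
    	"time"
    )

    func main() {
    	// Verbatim guest output from the log: seconds.nanoseconds since the epoch.
    	guestRaw := "1737979871.508656494"
    	secs, err := strconv.ParseFloat(guestRaw, 64)
    	if err != nil {
    		log.Fatal(err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second))).UTC()
    	delta := guest.Sub(time.Now().UTC())
    	fmt.Printf("guest clock: %s (delta vs. this host: %s)\n", guest, delta)
    	// Same correction the provisioner runs over SSH when the skew is large enough.
    	fmt.Printf("sudo date -s @%d\n", int64(secs))
    }
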
	I0127 12:11:16.241196    8732 start.go:83] releasing machines lock for "multinode-659000", held for 2m9.7846454s
	I0127 12:11:16.241196    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:11:18.264308    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:11:18.265380    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:11:18.265380    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:11:20.736507    8732 main.go:141] libmachine: [stdout =====>] : 172.29.204.17
	
	I0127 12:11:20.736507    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:11:20.742102    8732 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0127 12:11:20.742199    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:11:20.754787    8732 ssh_runner.go:195] Run: cat /version.json
	I0127 12:11:20.754787    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:11:22.871820    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:11:22.871820    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:11:22.871820    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:11:22.872099    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:11:22.872099    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:11:22.872099    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:11:25.533796    8732 main.go:141] libmachine: [stdout =====>] : 172.29.204.17
	
	I0127 12:11:25.533796    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:11:25.535301    8732 sshutil.go:53] new ssh client: &{IP:172.29.204.17 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000\id_rsa Username:docker}
	I0127 12:11:25.558426    8732 main.go:141] libmachine: [stdout =====>] : 172.29.204.17
	
	I0127 12:11:25.558426    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:11:25.559191    8732 sshutil.go:53] new ssh client: &{IP:172.29.204.17 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000\id_rsa Username:docker}
	I0127 12:11:25.635234    8732 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0127 12:11:25.635716    8732 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8934665s)
	W0127 12:11:25.635812    8732 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0127 12:11:25.653777    8732 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0127 12:11:25.654620    8732 ssh_runner.go:235] Completed: cat /version.json: (4.8989391s)
	I0127 12:11:25.666160    8732 ssh_runner.go:195] Run: systemctl --version
	I0127 12:11:25.674988    8732 command_runner.go:130] > systemd 252 (252)
	I0127 12:11:25.675087    8732 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0127 12:11:25.685389    8732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 12:11:25.693154    8732 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0127 12:11:25.693937    8732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 12:11:25.705130    8732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:11:25.735223    8732 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0127 12:11:25.735223    8732 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 12:11:25.735223    8732 start.go:495] detecting cgroup driver to use...
	I0127 12:11:25.735223    8732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:11:25.771594    8732 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	W0127 12:11:25.782402    8732 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0127 12:11:25.782402    8732 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0127 12:11:25.785317    8732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 12:11:25.818259    8732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 12:11:25.838094    8732 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 12:11:25.848883    8732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 12:11:25.877613    8732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:11:25.907885    8732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 12:11:25.939598    8732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:11:25.968783    8732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:11:25.995366    8732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 12:11:26.025331    8732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 12:11:26.058855    8732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 12:11:26.094895    8732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:11:26.112565    8732 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:11:26.113889    8732 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:11:26.124697    8732 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 12:11:26.158211    8732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
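The three commands above prepare kernel networking for the bridge CNI: the net.bridge.bridge-nf-call-iptables sysctl only exists once the br_netfilter module is loaded, so when the probe fails the module is loaded and IPv4 forwarding is switched on. A small sketch of that check-then-load sequence as a standalone root-run helper (hypothetical; the log performs the same steps inside the VM over SSH):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // ensureBridgeNetfilter mirrors the sequence in the log: probe the sysctl,
    // load br_netfilter if the key is missing, then enable IPv4 forwarding.
    func ensureBridgeNetfilter() error {
        if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
                return fmt.Errorf("loading br_netfilter: %w", err)
            }
        }
        // kube-proxy and the bridge CNI need IPv4 forwarding enabled
        return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
    }

    func main() {
        if err := ensureBridgeNetfilter(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }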
	I0127 12:11:26.185381    8732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:11:26.380064    8732 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 12:11:26.410437    8732 start.go:495] detecting cgroup driver to use...
	I0127 12:11:26.423201    8732 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0127 12:11:26.451497    8732 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0127 12:11:26.451713    8732 command_runner.go:130] > [Unit]
	I0127 12:11:26.451713    8732 command_runner.go:130] > Description=Docker Application Container Engine
	I0127 12:11:26.451791    8732 command_runner.go:130] > Documentation=https://docs.docker.com
	I0127 12:11:26.451791    8732 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0127 12:11:26.451791    8732 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0127 12:11:26.451791    8732 command_runner.go:130] > StartLimitBurst=3
	I0127 12:11:26.451791    8732 command_runner.go:130] > StartLimitIntervalSec=60
	I0127 12:11:26.451791    8732 command_runner.go:130] > [Service]
	I0127 12:11:26.451791    8732 command_runner.go:130] > Type=notify
	I0127 12:11:26.451791    8732 command_runner.go:130] > Restart=on-failure
	I0127 12:11:26.451791    8732 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0127 12:11:26.451791    8732 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0127 12:11:26.451914    8732 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0127 12:11:26.451914    8732 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0127 12:11:26.451914    8732 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0127 12:11:26.451974    8732 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0127 12:11:26.451974    8732 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0127 12:11:26.452028    8732 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0127 12:11:26.452028    8732 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0127 12:11:26.452028    8732 command_runner.go:130] > ExecStart=
	I0127 12:11:26.452084    8732 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0127 12:11:26.452084    8732 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0127 12:11:26.452139    8732 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0127 12:11:26.452139    8732 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0127 12:11:26.452139    8732 command_runner.go:130] > LimitNOFILE=infinity
	I0127 12:11:26.452139    8732 command_runner.go:130] > LimitNPROC=infinity
	I0127 12:11:26.452198    8732 command_runner.go:130] > LimitCORE=infinity
	I0127 12:11:26.452198    8732 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0127 12:11:26.452198    8732 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0127 12:11:26.452198    8732 command_runner.go:130] > TasksMax=infinity
	I0127 12:11:26.452198    8732 command_runner.go:130] > TimeoutStartSec=0
	I0127 12:11:26.452252    8732 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0127 12:11:26.452252    8732 command_runner.go:130] > Delegate=yes
	I0127 12:11:26.452252    8732 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0127 12:11:26.452252    8732 command_runner.go:130] > KillMode=process
	I0127 12:11:26.452252    8732 command_runner.go:130] > [Install]
	I0127 12:11:26.452305    8732 command_runner.go:130] > WantedBy=multi-user.target
	I0127 12:11:26.464039    8732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:11:26.498307    8732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 12:11:26.538491    8732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:11:26.572800    8732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 12:11:26.605215    8732 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 12:11:26.674588    8732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 12:11:26.695766    8732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:11:26.736753    8732 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0127 12:11:26.747955    8732 ssh_runner.go:195] Run: which cri-dockerd
	I0127 12:11:26.754945    8732 command_runner.go:130] > /usr/bin/cri-dockerd
	I0127 12:11:26.764686    8732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0127 12:11:26.780974    8732 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0127 12:11:26.820341    8732 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0127 12:11:27.004841    8732 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0127 12:11:27.188954    8732 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0127 12:11:27.189152    8732 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0127 12:11:27.237701    8732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:11:27.427153    8732 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 12:11:29.968050    8732 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.540775s)
	I0127 12:11:29.978677    8732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0127 12:11:30.015149    8732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 12:11:30.046509    8732 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0127 12:11:30.252103    8732 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0127 12:11:30.433420    8732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:11:30.610772    8732 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0127 12:11:30.648644    8732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 12:11:30.678105    8732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:11:30.868384    8732 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0127 12:11:30.985397    8732 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0127 12:11:30.995938    8732 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0127 12:11:31.003336    8732 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0127 12:11:31.003511    8732 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0127 12:11:31.003511    8732 command_runner.go:130] > Device: 0,22	Inode: 878         Links: 1
	I0127 12:11:31.003574    8732 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0127 12:11:31.003574    8732 command_runner.go:130] > Access: 2025-01-27 12:11:30.906955352 +0000
	I0127 12:11:31.003574    8732 command_runner.go:130] > Modify: 2025-01-27 12:11:30.906955352 +0000
	I0127 12:11:31.003574    8732 command_runner.go:130] > Change: 2025-01-27 12:11:30.909955367 +0000
	I0127 12:11:31.003574    8732 command_runner.go:130] >  Birth: -
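"Will wait 60s for socket path" above is a retry loop: stat the socket until it appears or the deadline passes (here it succeeds on the first try). A minimal local sketch of that wait, for illustration only; the log runs stat(1) inside the VM over SSH:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls a path until it exists as a unix socket or the
    // timeout expires, like the 60s wait for /var/run/cri-dockerd.sock above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }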
	I0127 12:11:31.003637    8732 start.go:563] Will wait 60s for crictl version
	I0127 12:11:31.013355    8732 ssh_runner.go:195] Run: which crictl
	I0127 12:11:31.018671    8732 command_runner.go:130] > /usr/bin/crictl
	I0127 12:11:31.028541    8732 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:11:31.092108    8732 command_runner.go:130] > Version:  0.1.0
	I0127 12:11:31.092108    8732 command_runner.go:130] > RuntimeName:  docker
	I0127 12:11:31.092108    8732 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0127 12:11:31.092108    8732 command_runner.go:130] > RuntimeApiVersion:  v1
	I0127 12:11:31.092108    8732 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0127 12:11:31.100872    8732 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 12:11:31.136023    8732 command_runner.go:130] > 27.4.0
	I0127 12:11:31.145421    8732 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 12:11:31.177257    8732 command_runner.go:130] > 27.4.0
	I0127 12:11:31.182655    8732 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0127 12:11:31.183216    8732 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0127 12:11:31.188096    8732 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0127 12:11:31.188096    8732 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0127 12:11:31.188096    8732 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0127 12:11:31.188096    8732 ip.go:211] Found interface: {Index:17 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:43:05:a6 Flags:up|broadcast|multicast|running}
	I0127 12:11:31.191080    8732 ip.go:214] interface addr: fe80::8ceb:a58b:811a:7c79/64
	I0127 12:11:31.191080    8732 ip.go:214] interface addr: 172.29.192.1/20
	I0127 12:11:31.201081    8732 ssh_runner.go:195] Run: grep 172.29.192.1	host.minikube.internal$ /etc/hosts
	I0127 12:11:31.207591    8732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:11:31.227025    8732 kubeadm.go:883] updating cluster {Name:multinode-659000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-659000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.204.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 12:11:31.227247    8732 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0127 12:11:31.235950    8732 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 12:11:31.256245    8732 docker.go:689] Got preloaded images: 
	I0127 12:11:31.256245    8732 docker.go:695] registry.k8s.io/kube-apiserver:v1.32.1 wasn't preloaded
	I0127 12:11:31.266839    8732 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0127 12:11:31.284134    8732 command_runner.go:139] > {"Repositories":{}}
	I0127 12:11:31.295268    8732 ssh_runner.go:195] Run: which lz4
	I0127 12:11:31.300973    8732 command_runner.go:130] > /usr/bin/lz4
	I0127 12:11:31.301078    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0127 12:11:31.312540    8732 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0127 12:11:31.318170    8732 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 12:11:31.319263    8732 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0127 12:11:31.319314    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (349810983 bytes)
	I0127 12:11:33.031749    8732 docker.go:653] duration metric: took 1.7306533s to copy over tarball
	I0127 12:11:33.044477    8732 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0127 12:11:41.462739    8732 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.417973s)
	I0127 12:11:41.462739    8732 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0127 12:11:41.525983    8732 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0127 12:11:41.548174    8732 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.3":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.16-0":"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5":"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.32.1":"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac":"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.32.1":"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954":"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.32.1":"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5":"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.32.1":"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e":"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.10":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136"}}}
	I0127 12:11:41.548516    8732 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0127 12:11:41.597492    8732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:11:41.794413    8732 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 12:11:45.095640    8732 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3006771s)
	I0127 12:11:45.105059    8732 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 12:11:45.132835    8732 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.32.1
	I0127 12:11:45.132897    8732 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 12:11:45.132897    8732 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.32.1
	I0127 12:11:45.132897    8732 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.32.1
	I0127 12:11:45.132897    8732 command_runner.go:130] > registry.k8s.io/etcd:3.5.16-0
	I0127 12:11:45.132897    8732 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0127 12:11:45.132897    8732 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0127 12:11:45.132897    8732 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:11:45.132897    8732 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0127 12:11:45.132897    8732 cache_images.go:84] Images are preloaded, skipping loading
	I0127 12:11:45.132897    8732 kubeadm.go:934] updating node { 172.29.204.17 8443 v1.32.1 docker true true} ...
	I0127 12:11:45.132897    8732 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-659000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.204.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:multinode-659000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 12:11:45.141484    8732 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0127 12:11:45.201283    8732 command_runner.go:130] > cgroupfs
	I0127 12:11:45.202658    8732 cni.go:84] Creating CNI manager for ""
	I0127 12:11:45.202658    8732 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0127 12:11:45.202658    8732 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 12:11:45.202658    8732 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.29.204.17 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-659000 NodeName:multinode-659000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.29.204.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.29.204.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 12:11:45.202658    8732 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.29.204.17
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-659000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.29.204.17"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.29.204.17"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 12:11:45.213901    8732 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 12:11:45.230900    8732 command_runner.go:130] > kubeadm
	I0127 12:11:45.231063    8732 command_runner.go:130] > kubectl
	I0127 12:11:45.231063    8732 command_runner.go:130] > kubelet
	I0127 12:11:45.231063    8732 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:11:45.241997    8732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:11:45.257056    8732 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0127 12:11:45.285816    8732 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:11:45.312054    8732 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I0127 12:11:45.353242    8732 ssh_runner.go:195] Run: grep 172.29.204.17	control-plane.minikube.internal$ /etc/hosts
	I0127 12:11:45.360785    8732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.204.17	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:11:45.392914    8732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:11:45.577488    8732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:11:45.604369    8732 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000 for IP: 172.29.204.17
	I0127 12:11:45.604369    8732 certs.go:194] generating shared ca certs ...
	I0127 12:11:45.604369    8732 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:11:45.605780    8732 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0127 12:11:45.605969    8732 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0127 12:11:45.606515    8732 certs.go:256] generating profile certs ...
	I0127 12:11:45.606762    8732 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\client.key
	I0127 12:11:45.607305    8732 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\client.crt with IP's: []
	I0127 12:11:45.724046    8732 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\client.crt ...
	I0127 12:11:45.724046    8732 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\client.crt: {Name:mk53a528936b20c25824af6e23e6db2adb0bc47f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:11:45.725673    8732 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\client.key ...
	I0127 12:11:45.725673    8732 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\client.key: {Name:mk9c347d99da6f3637a570ba65223a15f7e64315 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:11:45.727683    8732 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.key.6d69a00f
	I0127 12:11:45.728012    8732 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.crt.6d69a00f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.29.204.17]
	I0127 12:11:45.977657    8732 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.crt.6d69a00f ...
	I0127 12:11:45.977657    8732 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.crt.6d69a00f: {Name:mk9dd392011818dc60ddc6529b33ca12066f3688 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:11:45.978947    8732 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.key.6d69a00f ...
	I0127 12:11:45.978947    8732 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.key.6d69a00f: {Name:mk55bcc4e2846090e098ad3cd58e246f9436d6c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:11:45.979863    8732 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.crt.6d69a00f -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.crt
	I0127 12:11:45.994523    8732 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.key.6d69a00f -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.key
	I0127 12:11:45.996668    8732 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\proxy-client.key
	I0127 12:11:45.996807    8732 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\proxy-client.crt with IP's: []
	I0127 12:11:46.112171    8732 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\proxy-client.crt ...
	I0127 12:11:46.112171    8732 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\proxy-client.crt: {Name:mk43dd56a31d5c86539668e182109a266b956025 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:11:46.113396    8732 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\proxy-client.key ...
	I0127 12:11:46.113396    8732 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\proxy-client.key: {Name:mkb51dca3d2c64b9ab5a2a026a4944c034df786a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
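The certs.go/crypto.go lines above generate the profile certificates, including an apiserver serving cert whose IP SANs cover the service VIP (10.96.0.1), loopback, and the node IP (172.29.204.17). A minimal sketch of issuing a cert with those IP SANs using Go's crypto/x509 follows; it is self-signed for brevity and only illustrates the idea, whereas minikube signs these certs with its shared minikubeCA:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // fresh RSA key for the serving certificate
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // template with the same IP SANs the apiserver cert above carries
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("172.29.204.17"),
            },
        }
        // self-signed here; a CA-signed variant would pass the CA cert/key as parent
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }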
	I0127 12:11:46.114650    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0127 12:11:46.115200    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0127 12:11:46.115200    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0127 12:11:46.115200    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0127 12:11:46.115200    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0127 12:11:46.115957    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0127 12:11:46.116053    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0127 12:11:46.126979    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0127 12:11:46.128151    8732 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem (1338 bytes)
	W0127 12:11:46.128930    8732 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956_empty.pem, impossibly tiny 0 bytes
	I0127 12:11:46.129057    8732 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0127 12:11:46.129057    8732 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0127 12:11:46.129057    8732 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0127 12:11:46.129802    8732 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0127 12:11:46.130085    8732 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem (1708 bytes)
	I0127 12:11:46.130736    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:11:46.131002    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem -> /usr/share/ca-certificates/5956.pem
	I0127 12:11:46.131115    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> /usr/share/ca-certificates/59562.pem
	I0127 12:11:46.131874    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:11:46.175829    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 12:11:46.217028    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:11:46.259657    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:11:46.300695    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 12:11:46.345308    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0127 12:11:46.386613    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:11:46.431100    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 12:11:46.476345    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:11:46.523422    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem --> /usr/share/ca-certificates/5956.pem (1338 bytes)
	I0127 12:11:46.580765    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem --> /usr/share/ca-certificates/59562.pem (1708 bytes)
	I0127 12:11:46.623382    8732 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:11:46.670770    8732 ssh_runner.go:195] Run: openssl version
	I0127 12:11:46.679027    8732 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0127 12:11:46.689585    8732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59562.pem && ln -fs /usr/share/ca-certificates/59562.pem /etc/ssl/certs/59562.pem"
	I0127 12:11:46.719333    8732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59562.pem
	I0127 12:11:46.726076    8732 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 27 10:52 /usr/share/ca-certificates/59562.pem
	I0127 12:11:46.726076    8732 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:52 /usr/share/ca-certificates/59562.pem
	I0127 12:11:46.737399    8732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59562.pem
	I0127 12:11:46.746418    8732 command_runner.go:130] > 3ec20f2e
	I0127 12:11:46.756680    8732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59562.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:11:46.785865    8732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:11:46.814860    8732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:11:46.821070    8732 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 27 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:11:46.821070    8732 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:11:46.832290    8732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:11:46.840797    8732 command_runner.go:130] > b5213941
	I0127 12:11:46.851946    8732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 12:11:46.880850    8732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5956.pem && ln -fs /usr/share/ca-certificates/5956.pem /etc/ssl/certs/5956.pem"
	I0127 12:11:46.909971    8732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5956.pem
	I0127 12:11:46.916133    8732 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 27 10:52 /usr/share/ca-certificates/5956.pem
	I0127 12:11:46.916416    8732 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:52 /usr/share/ca-certificates/5956.pem
	I0127 12:11:46.925693    8732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5956.pem
	I0127 12:11:46.934787    8732 command_runner.go:130] > 51391683
	I0127 12:11:46.946901    8732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5956.pem /etc/ssl/certs/51391683.0"
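Each "openssl x509 -hash" / "ln -fs" pair above installs a CA certificate under /etc/ssl/certs/<subject-hash>.0, the naming scheme OpenSSL uses to find CAs by directory lookup. A small sketch of that step as a local helper (hypothetical standalone code; the log issues the equivalent commands inside the VM over SSH):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkByHash asks openssl for the certificate's subject hash and links
    // /etc/ssl/certs/<hash>.0 to the certificate file.
    func linkByHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkByHash("/usr/share/ca-certificates/5956.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }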
	I0127 12:11:46.976058    8732 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:11:46.982616    8732 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 12:11:46.983192    8732 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 12:11:46.983192    8732 kubeadm.go:392] StartCluster: {Name:multinode-659000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-659000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.204.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:11:46.990618    8732 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0127 12:11:47.028490    8732 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 12:11:47.054398    8732 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0127 12:11:47.054398    8732 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0127 12:11:47.054398    8732 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0127 12:11:47.067106    8732 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:11:47.097422    8732 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:11:47.112049    8732 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0127 12:11:47.112962    8732 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0127 12:11:47.112962    8732 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0127 12:11:47.113001    8732 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:11:47.113357    8732 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:11:47.113357    8732 kubeadm.go:157] found existing configuration files:
	
	I0127 12:11:47.124085    8732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:11:47.140875    8732 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:11:47.141636    8732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:11:47.153086    8732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:11:47.180070    8732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:11:47.196642    8732 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:11:47.197292    8732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:11:47.208336    8732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:11:47.235876    8732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:11:47.258239    8732 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:11:47.258239    8732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:11:47.269235    8732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:11:47.293121    8732 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:11:47.305429    8732 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:11:47.305515    8732 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:11:47.314536    8732 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:11:47.328765    8732 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0127 12:11:47.640757    8732 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 12:11:47.640757    8732 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 12:11:59.707522    8732 kubeadm.go:310] [init] Using Kubernetes version: v1.32.1
	I0127 12:11:59.707522    8732 command_runner.go:130] > [init] Using Kubernetes version: v1.32.1
	I0127 12:11:59.707689    8732 kubeadm.go:310] [preflight] Running pre-flight checks
	I0127 12:11:59.707593    8732 command_runner.go:130] > [preflight] Running pre-flight checks
	I0127 12:11:59.707915    8732 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:11:59.707915    8732 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0127 12:11:59.707915    8732 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:11:59.707915    8732 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0127 12:11:59.708448    8732 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 12:11:59.708448    8732 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0127 12:11:59.708629    8732 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:11:59.708629    8732 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:11:59.713143    8732 out.go:235]   - Generating certificates and keys ...
	I0127 12:11:59.713785    8732 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0127 12:11:59.713785    8732 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0127 12:11:59.713785    8732 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0127 12:11:59.713785    8732 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0127 12:11:59.713785    8732 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 12:11:59.713785    8732 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0127 12:11:59.714321    8732 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0127 12:11:59.714321    8732 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0127 12:11:59.714538    8732 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0127 12:11:59.714538    8732 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0127 12:11:59.714538    8732 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0127 12:11:59.714538    8732 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0127 12:11:59.714538    8732 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0127 12:11:59.714538    8732 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0127 12:11:59.715244    8732 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-659000] and IPs [172.29.204.17 127.0.0.1 ::1]
	I0127 12:11:59.715244    8732 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-659000] and IPs [172.29.204.17 127.0.0.1 ::1]
	I0127 12:11:59.715244    8732 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0127 12:11:59.715244    8732 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0127 12:11:59.715244    8732 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-659000] and IPs [172.29.204.17 127.0.0.1 ::1]
	I0127 12:11:59.715244    8732 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-659000] and IPs [172.29.204.17 127.0.0.1 ::1]
	I0127 12:11:59.715244    8732 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 12:11:59.715244    8732 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0127 12:11:59.715244    8732 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 12:11:59.715244    8732 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0127 12:11:59.715244    8732 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0127 12:11:59.715244    8732 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0127 12:11:59.715244    8732 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:11:59.715244    8732 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:11:59.716262    8732 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:11:59.716262    8732 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:11:59.716262    8732 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 12:11:59.716436    8732 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 12:11:59.716650    8732 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:11:59.716679    8732 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:11:59.716751    8732 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:11:59.716751    8732 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:11:59.716751    8732 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:11:59.717008    8732 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:11:59.717150    8732 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:11:59.717150    8732 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:11:59.717150    8732 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:11:59.717150    8732 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:11:59.721120    8732 out.go:235]   - Booting up control plane ...
	I0127 12:11:59.722147    8732 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:11:59.722147    8732 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:11:59.722147    8732 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:11:59.722147    8732 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:11:59.722147    8732 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:11:59.722147    8732 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:11:59.722147    8732 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:11:59.722147    8732 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:11:59.722933    8732 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:11:59.722933    8732 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:11:59.723118    8732 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0127 12:11:59.723118    8732 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0127 12:11:59.723118    8732 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 12:11:59.723118    8732 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0127 12:11:59.723756    8732 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 12:11:59.723756    8732 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 12:11:59.723756    8732 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001518735s
	I0127 12:11:59.723756    8732 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001518735s
	I0127 12:11:59.723756    8732 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 12:11:59.723756    8732 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0127 12:11:59.723756    8732 kubeadm.go:310] [api-check] The API server is healthy after 6.503082081s
	I0127 12:11:59.723756    8732 command_runner.go:130] > [api-check] The API server is healthy after 6.503082081s
	I0127 12:11:59.724415    8732 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 12:11:59.724415    8732 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0127 12:11:59.724415    8732 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 12:11:59.724415    8732 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0127 12:11:59.724415    8732 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0127 12:11:59.724415    8732 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0127 12:11:59.725173    8732 command_runner.go:130] > [mark-control-plane] Marking the node multinode-659000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 12:11:59.725173    8732 kubeadm.go:310] [mark-control-plane] Marking the node multinode-659000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0127 12:11:59.725458    8732 command_runner.go:130] > [bootstrap-token] Using token: j363cb.jd15s1r6zmlhe276
	I0127 12:11:59.725458    8732 kubeadm.go:310] [bootstrap-token] Using token: j363cb.jd15s1r6zmlhe276
	I0127 12:11:59.728119    8732 out.go:235]   - Configuring RBAC rules ...
	I0127 12:11:59.728440    8732 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 12:11:59.728463    8732 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0127 12:11:59.728607    8732 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 12:11:59.728667    8732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0127 12:11:59.729167    8732 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 12:11:59.729167    8732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0127 12:11:59.729434    8732 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 12:11:59.729434    8732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0127 12:11:59.729740    8732 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 12:11:59.729804    8732 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0127 12:11:59.729804    8732 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 12:11:59.729804    8732 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0127 12:11:59.729804    8732 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 12:11:59.729804    8732 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0127 12:11:59.730444    8732 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0127 12:11:59.730444    8732 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0127 12:11:59.730444    8732 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0127 12:11:59.730444    8732 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0127 12:11:59.730444    8732 kubeadm.go:310] 
	I0127 12:11:59.730444    8732 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0127 12:11:59.730444    8732 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0127 12:11:59.730444    8732 kubeadm.go:310] 
	I0127 12:11:59.730444    8732 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0127 12:11:59.730444    8732 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0127 12:11:59.730444    8732 kubeadm.go:310] 
	I0127 12:11:59.730444    8732 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0127 12:11:59.730444    8732 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0127 12:11:59.730444    8732 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 12:11:59.730444    8732 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0127 12:11:59.731439    8732 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 12:11:59.731439    8732 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0127 12:11:59.731439    8732 kubeadm.go:310] 
	I0127 12:11:59.731439    8732 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0127 12:11:59.731439    8732 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0127 12:11:59.731439    8732 kubeadm.go:310] 
	I0127 12:11:59.731439    8732 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 12:11:59.731439    8732 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0127 12:11:59.731439    8732 kubeadm.go:310] 
	I0127 12:11:59.731439    8732 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0127 12:11:59.731439    8732 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0127 12:11:59.731439    8732 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 12:11:59.731439    8732 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0127 12:11:59.731439    8732 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 12:11:59.731439    8732 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0127 12:11:59.731439    8732 kubeadm.go:310] 
	I0127 12:11:59.732917    8732 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0127 12:11:59.732984    8732 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0127 12:11:59.733145    8732 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0127 12:11:59.733145    8732 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0127 12:11:59.733145    8732 kubeadm.go:310] 
	I0127 12:11:59.733145    8732 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token j363cb.jd15s1r6zmlhe276 \
	I0127 12:11:59.733669    8732 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token j363cb.jd15s1r6zmlhe276 \
	I0127 12:11:59.733784    8732 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:7b53ba02e26824ededdf08178373023b65bda8005ddad46edfe91cb6a3cb8d3f \
	I0127 12:11:59.733784    8732 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7b53ba02e26824ededdf08178373023b65bda8005ddad46edfe91cb6a3cb8d3f \
	I0127 12:11:59.733784    8732 command_runner.go:130] > 	--control-plane 
	I0127 12:11:59.733784    8732 kubeadm.go:310] 	--control-plane 
	I0127 12:11:59.733784    8732 kubeadm.go:310] 
	I0127 12:11:59.733784    8732 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0127 12:11:59.733784    8732 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0127 12:11:59.733784    8732 kubeadm.go:310] 
	I0127 12:11:59.734403    8732 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token j363cb.jd15s1r6zmlhe276 \
	I0127 12:11:59.734403    8732 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token j363cb.jd15s1r6zmlhe276 \
	I0127 12:11:59.734403    8732 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:7b53ba02e26824ededdf08178373023b65bda8005ddad46edfe91cb6a3cb8d3f 
	I0127 12:11:59.734403    8732 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:7b53ba02e26824ededdf08178373023b65bda8005ddad46edfe91cb6a3cb8d3f 
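The --discovery-token-ca-cert-hash printed above is the SHA-256 digest of the cluster CA's DER-encoded public key. As a rough sketch (assuming the certificateDir /var/lib/minikube/certs reported earlier in this run), the value could be re-derived from inside the VM with:

    # recompute the discovery-token CA cert hash from the cluster CA certificate
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'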
	I0127 12:11:59.734403    8732 cni.go:84] Creating CNI manager for ""
	I0127 12:11:59.734403    8732 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0127 12:11:59.738557    8732 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0127 12:11:59.752591    8732 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0127 12:11:59.761558    8732 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0127 12:11:59.762581    8732 command_runner.go:130] >   Size: 3103192   	Blocks: 6064       IO Block: 4096   regular file
	I0127 12:11:59.762581    8732 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0127 12:11:59.762581    8732 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0127 12:11:59.762581    8732 command_runner.go:130] > Access: 2025-01-27 12:10:10.085607900 +0000
	I0127 12:11:59.762581    8732 command_runner.go:130] > Modify: 2025-01-14 09:03:58.000000000 +0000
	I0127 12:11:59.762581    8732 command_runner.go:130] > Change: 2025-01-27 12:10:01.869000000 +0000
	I0127 12:11:59.762581    8732 command_runner.go:130] >  Birth: -
	I0127 12:11:59.762834    8732 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0127 12:11:59.762834    8732 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0127 12:11:59.811865    8732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0127 12:12:00.594514    8732 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0127 12:12:00.594658    8732 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0127 12:12:00.594658    8732 command_runner.go:130] > serviceaccount/kindnet created
	I0127 12:12:00.594658    8732 command_runner.go:130] > daemonset.apps/kindnet created
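At this point the kindnet manifest has been applied and its RBAC objects, ServiceAccount and DaemonSet exist. A simple host-side way to confirm the CNI pods come up, assuming kubectl is pointed at this cluster, would be:

    # wait for the kindnet DaemonSet in kube-system to finish rolling out
    kubectl -n kube-system rollout status daemonset/kindnet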
	I0127 12:12:00.594760    8732 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:12:00.607801    8732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:12:00.610805    8732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-659000 minikube.k8s.io/updated_at=2025_01_27T12_12_00_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=multinode-659000 minikube.k8s.io/primary=true
	I0127 12:12:00.632157    8732 command_runner.go:130] > -16
	I0127 12:12:00.632271    8732 ops.go:34] apiserver oom_adj: -16
	I0127 12:12:00.787898    8732 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0127 12:12:00.787898    8732 command_runner.go:130] > node/multinode-659000 labeled
	I0127 12:12:00.797888    8732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:12:00.925660    8732 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0127 12:12:01.299880    8732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:12:01.415961    8732 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0127 12:12:01.799808    8732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:12:01.908308    8732 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0127 12:12:02.299266    8732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:12:02.407218    8732 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0127 12:12:02.799250    8732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:12:02.902195    8732 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0127 12:12:03.298485    8732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0127 12:12:03.408442    8732 command_runner.go:130] > NAME      SECRETS   AGE
	I0127 12:12:03.408442    8732 command_runner.go:130] > default   0         0s
	I0127 12:12:03.409453    8732 kubeadm.go:1113] duration metric: took 2.8146s to wait for elevateKubeSystemPrivileges
	I0127 12:12:03.409453    8732 kubeadm.go:394] duration metric: took 16.4260901s to StartCluster
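The elevateKubeSystemPrivileges wait above simply polls until the controller-manager has created the "default" ServiceAccount, which is why the first few attempts return NotFound. A host-side equivalent of that readiness check, assuming the kubeconfig context is named after the profile, might be:

    # succeeds once the controller-manager has created the default ServiceAccount
    kubectl --context multinode-659000 -n default get serviceaccount default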
	I0127 12:12:03.409453    8732 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:12:03.409453    8732 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 12:12:03.411876    8732 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:12:03.413518    8732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0127 12:12:03.413518    8732 start.go:235] Will wait 6m0s for node &{Name: IP:172.29.204.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 12:12:03.413518    8732 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:12:03.413518    8732 addons.go:69] Setting storage-provisioner=true in profile "multinode-659000"
	I0127 12:12:03.413518    8732 addons.go:69] Setting default-storageclass=true in profile "multinode-659000"
	I0127 12:12:03.414052    8732 addons.go:238] Setting addon storage-provisioner=true in "multinode-659000"
	I0127 12:12:03.414158    8732 config.go:182] Loaded profile config "multinode-659000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:12:03.414052    8732 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-659000"
	I0127 12:12:03.414243    8732 host.go:66] Checking if "multinode-659000" exists ...
	I0127 12:12:03.415560    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:12:03.415986    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:12:03.418948    8732 out.go:177] * Verifying Kubernetes components...
	I0127 12:12:03.433951    8732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:12:03.635849    8732 command_runner.go:130] > apiVersion: v1
	I0127 12:12:03.635849    8732 command_runner.go:130] > data:
	I0127 12:12:03.635849    8732 command_runner.go:130] >   Corefile: |
	I0127 12:12:03.635849    8732 command_runner.go:130] >     .:53 {
	I0127 12:12:03.635849    8732 command_runner.go:130] >         errors
	I0127 12:12:03.635849    8732 command_runner.go:130] >         health {
	I0127 12:12:03.635849    8732 command_runner.go:130] >            lameduck 5s
	I0127 12:12:03.635849    8732 command_runner.go:130] >         }
	I0127 12:12:03.635849    8732 command_runner.go:130] >         ready
	I0127 12:12:03.635849    8732 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0127 12:12:03.635849    8732 command_runner.go:130] >            pods insecure
	I0127 12:12:03.635849    8732 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0127 12:12:03.635849    8732 command_runner.go:130] >            ttl 30
	I0127 12:12:03.635849    8732 command_runner.go:130] >         }
	I0127 12:12:03.635849    8732 command_runner.go:130] >         prometheus :9153
	I0127 12:12:03.635849    8732 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0127 12:12:03.635849    8732 command_runner.go:130] >            max_concurrent 1000
	I0127 12:12:03.635849    8732 command_runner.go:130] >         }
	I0127 12:12:03.635849    8732 command_runner.go:130] >         cache 30 {
	I0127 12:12:03.635849    8732 command_runner.go:130] >            disable success cluster.local
	I0127 12:12:03.635849    8732 command_runner.go:130] >            disable denial cluster.local
	I0127 12:12:03.635849    8732 command_runner.go:130] >         }
	I0127 12:12:03.635849    8732 command_runner.go:130] >         loop
	I0127 12:12:03.635849    8732 command_runner.go:130] >         reload
	I0127 12:12:03.635849    8732 command_runner.go:130] >         loadbalance
	I0127 12:12:03.635849    8732 command_runner.go:130] >     }
	I0127 12:12:03.635849    8732 command_runner.go:130] > kind: ConfigMap
	I0127 12:12:03.635849    8732 command_runner.go:130] > metadata:
	I0127 12:12:03.635849    8732 command_runner.go:130] >   creationTimestamp: "2025-01-27T12:11:58Z"
	I0127 12:12:03.635849    8732 command_runner.go:130] >   name: coredns
	I0127 12:12:03.635849    8732 command_runner.go:130] >   namespace: kube-system
	I0127 12:12:03.635849    8732 command_runner.go:130] >   resourceVersion: "267"
	I0127 12:12:03.635849    8732 command_runner.go:130] >   uid: b63939d2-0139-4912-aa5a-fce9a87237db
	I0127 12:12:03.635849    8732 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.29.192.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0127 12:12:03.714041    8732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:12:04.176019    8732 command_runner.go:130] > configmap/coredns replaced
	I0127 12:12:04.176019    8732 start.go:971] {"host.minikube.internal": 172.29.192.1} host record injected into CoreDNS's ConfigMap
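Per the sed expressions in the replace command above, the Corefile now carries a hosts block ahead of the forward plugin so that host.minikube.internal resolves to the host gateway 172.29.192.1. It can be inspected with something like:

    # print the patched Corefile; expect a "hosts { 172.29.192.1 host.minikube.internal ... fallthrough }" stanza
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'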
	I0127 12:12:04.177908    8732 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 12:12:04.177908    8732 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 12:12:04.179165    8732 kapi.go:59] client config for multinode-659000: &rest.Config{Host:"https://172.29.204.17:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-659000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-659000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x301e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 12:12:04.179341    8732 kapi.go:59] client config for multinode-659000: &rest.Config{Host:"https://172.29.204.17:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-659000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-659000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x301e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 12:12:04.181162    8732 cert_rotation.go:140] Starting client certificate rotation controller
	I0127 12:12:04.181162    8732 node_ready.go:35] waiting up to 6m0s for node "multinode-659000" to be "Ready" ...
	I0127 12:12:04.181809    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:04.181924    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:04.181809    8732 round_trippers.go:463] GET https://172.29.204.17:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0127 12:12:04.181959    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:04.182011    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:04.182058    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:04.182011    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:04.182058    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:04.205361    8732 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0127 12:12:04.205361    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:04.205361    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:04.205361    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:04.205361    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:04 GMT
	I0127 12:12:04.205361    8732 round_trippers.go:580]     Audit-Id: 654518ae-1f22-4394-9802-36496fe41ae4
	I0127 12:12:04.205361    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:04.205361    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:04.206364    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:04.207358    8732 round_trippers.go:574] Response Status: 200 OK in 25 milliseconds
	I0127 12:12:04.207358    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:04.207358    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:04.207358    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:04.207358    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:04.207358    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:04.207358    8732 round_trippers.go:580]     Content-Length: 291
	I0127 12:12:04.207358    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:04 GMT
	I0127 12:12:04.207358    8732 round_trippers.go:580]     Audit-Id: aa6e2b39-c704-486a-9f23-92642485c1ec
	I0127 12:12:04.207358    8732 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d8a16538-361b-4d3a-b849-97470e1a7b14","resourceVersion":"357","creationTimestamp":"2025-01-27T12:11:59Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0127 12:12:04.208362    8732 request.go:1351] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d8a16538-361b-4d3a-b849-97470e1a7b14","resourceVersion":"357","creationTimestamp":"2025-01-27T12:11:59Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0127 12:12:04.208362    8732 round_trippers.go:463] PUT https://172.29.204.17:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0127 12:12:04.208362    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:04.208362    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:04.208362    8732 round_trippers.go:473]     Content-Type: application/json
	I0127 12:12:04.208362    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:04.232754    8732 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0127 12:12:04.232754    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:04.232860    8732 round_trippers.go:580]     Content-Length: 291
	I0127 12:12:04.232860    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:04 GMT
	I0127 12:12:04.232860    8732 round_trippers.go:580]     Audit-Id: 2ad90bc1-a2fa-43f0-a2c2-70b707406189
	I0127 12:12:04.232860    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:04.232860    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:04.232860    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:04.232943    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:04.232943    8732 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d8a16538-361b-4d3a-b849-97470e1a7b14","resourceVersion":"365","creationTimestamp":"2025-01-27T12:11:59Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0127 12:12:04.681361    8732 round_trippers.go:463] GET https://172.29.204.17:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0127 12:12:04.681361    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:04.681361    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:04.681361    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:04.681361    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:04.681361    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:04.681361    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:04.681361    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:04.686882    8732 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:12:04.686882    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:04.686882    8732 round_trippers.go:580]     Audit-Id: a093fe43-f9e4-42ea-9d7f-75ee1c6e4dfd
	I0127 12:12:04.686882    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:04.686882    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:04.686882    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:04.686882    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:04.686882    8732 round_trippers.go:580]     Content-Length: 291
	I0127 12:12:04.686882    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:04 GMT
	I0127 12:12:04.686882    8732 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"d8a16538-361b-4d3a-b849-97470e1a7b14","resourceVersion":"385","creationTimestamp":"2025-01-27T12:11:59Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0127 12:12:04.686882    8732 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-659000" context rescaled to 1 replicas
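The scale-subresource PUT above trims the default two coredns replicas down to one, which is all a single-node cluster needs. The equivalent imperative command would be along the lines of:

    # scale the coredns Deployment in kube-system down to a single replica
    kubectl -n kube-system scale deployment coredns --replicas=1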
	I0127 12:12:04.688717    8732 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 12:12:04.688717    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:04.688717    8732 round_trippers.go:580]     Audit-Id: b288ea87-4875-4507-a051-5ee27929fb16
	I0127 12:12:04.688717    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:04.688717    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:04.688717    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:04.688717    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:04.688717    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:04 GMT
	I0127 12:12:04.689133    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:05.181966    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:05.181966    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:05.181966    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:05.181966    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:05.190236    8732 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0127 12:12:05.190368    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:05.190403    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:05.190403    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:05.190403    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:05.190403    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:05 GMT
	I0127 12:12:05.190403    8732 round_trippers.go:580]     Audit-Id: d503108d-ce91-4d02-8601-fbde5e0d0816
	I0127 12:12:05.190403    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:05.190909    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:05.682021    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:05.682021    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:05.682021    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:05.682021    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:05.685024    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:12:05.685024    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:05.685024    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:05.685024    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:05.685024    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:05.685024    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:05 GMT
	I0127 12:12:05.685024    8732 round_trippers.go:580]     Audit-Id: cc99d0af-ad6f-4811-973a-bc5119e0996b
	I0127 12:12:05.685024    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:05.685024    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:05.715026    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:12:05.715026    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:12:05.715687    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:12:05.715687    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:12:05.717442    8732 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 12:12:05.717971    8732 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:12:05.717971    8732 kapi.go:59] client config for multinode-659000: &rest.Config{Host:"https://172.29.204.17:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-659000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-659000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x301e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 12:12:05.719157    8732 addons.go:238] Setting addon default-storageclass=true in "multinode-659000"
	I0127 12:12:05.719389    8732 host.go:66] Checking if "multinode-659000" exists ...
	I0127 12:12:05.720073    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:12:05.720836    8732 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:12:05.720886    8732 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0127 12:12:05.720886    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:12:06.181934    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:06.181934    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:06.181934    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:06.181934    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:06.187907    8732 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:12:06.187953    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:06.187953    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:06.187953    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:06.187953    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:06.188045    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:06 GMT
	I0127 12:12:06.188045    8732 round_trippers.go:580]     Audit-Id: f76e7e60-47a5-4c8b-8c60-48c4762d69de
	I0127 12:12:06.188045    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:06.188594    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:06.189232    8732 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
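node_ready keeps re-fetching the node object until its Ready condition flips to True, hence the repeated GETs against /api/v1/nodes/multinode-659000. A hand-rolled version of the same check, assuming the multinode-659000 context, might look like:

    # prints "True" once the kubelet reports the node as Ready
    kubectl --context multinode-659000 get node multinode-659000 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'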
	I0127 12:12:06.682682    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:06.682682    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:06.682794    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:06.682794    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:06.687094    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:12:06.687094    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:06.687214    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:06 GMT
	I0127 12:12:06.687214    8732 round_trippers.go:580]     Audit-Id: ba639d10-17e7-4d9d-aa98-83268052b0de
	I0127 12:12:06.687214    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:06.687214    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:06.687214    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:06.687214    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:06.687431    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:07.181924    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:07.181924    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:07.181924    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:07.181924    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:07.186995    8732 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:12:07.186995    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:07.186995    8732 round_trippers.go:580]     Audit-Id: 5426104b-9fa3-4674-a94b-df1e586fbb6d
	I0127 12:12:07.186995    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:07.186995    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:07.186995    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:07.186995    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:07.186995    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:07 GMT
	I0127 12:12:07.186995    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:07.683106    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:07.683106    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:07.683106    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:07.683106    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:07.688406    8732 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:12:07.688487    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:07.688487    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:07 GMT
	I0127 12:12:07.688487    8732 round_trippers.go:580]     Audit-Id: df1c9e2c-9137-4009-8fd2-21610d0e2712
	I0127 12:12:07.688683    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:07.688683    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:07.688683    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:07.688683    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:07.689147    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:07.971928    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:12:07.971928    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:12:07.972324    8732 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0127 12:12:07.972324    8732 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0127 12:12:07.972324    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:12:07.974157    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:12:07.974157    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:12:07.974320    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:12:08.181660    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:08.181660    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:08.181660    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:08.181660    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:08.187166    8732 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:12:08.187166    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:08.187166    8732 round_trippers.go:580]     Audit-Id: 647c2686-5a35-4697-9a10-b6f1bc1b950c
	I0127 12:12:08.187166    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:08.187166    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:08.187166    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:08.187166    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:08.187166    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:08 GMT
	I0127 12:12:08.187166    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:08.682490    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:08.682490    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:08.682490    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:08.682490    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:08.865894    8732 round_trippers.go:574] Response Status: 200 OK in 183 milliseconds
	I0127 12:12:08.866035    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:08.866119    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:08.866119    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:08 GMT
	I0127 12:12:08.866119    8732 round_trippers.go:580]     Audit-Id: e85ab406-ea59-4a99-8864-2a0eca7d853b
	I0127 12:12:08.866119    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:08.866188    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:08.866188    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:08.866605    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:08.866954    8732 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
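	(The requests above are one round of the harness's node_ready poll: roughly every 500 ms it GETs /api/v1/nodes/multinode-659000 and logs Ready=False until the kubelet posts a Ready condition. A minimal sketch of that kind of readiness poll with client-go is below; it assumes a reachable kubeconfig, and the function and path names are illustrative, not minikube's actual helpers.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForNodeReady polls the Node object until its Ready condition is True
	// or the timeout elapses, mirroring the GET-and-check loop in the log above.
	func waitForNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond) // same cadence as the requests logged above
		}
		return fmt.Errorf("node %q not Ready within %s", name, timeout)
	}

	func main() {
		// Kubeconfig path is illustrative; use whatever config the cluster exposes.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForNodeReady(context.Background(), cs, "multinode-659000", 4*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node Ready")
	}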
	I0127 12:12:09.182207    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:09.182207    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:09.182207    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:09.182207    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:09.187081    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:12:09.187165    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:09.187165    8732 round_trippers.go:580]     Audit-Id: 2e7f994f-2cd7-465a-a181-4fd3464beb66
	I0127 12:12:09.187165    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:09.187165    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:09.187165    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:09.187165    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:09.187264    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:09 GMT
	I0127 12:12:09.187760    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:09.681443    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:09.681443    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:09.681443    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:09.681443    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:09.685993    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:12:09.685993    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:09.685993    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:09.685993    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:09.685993    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:09.685993    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:09 GMT
	I0127 12:12:09.685993    8732 round_trippers.go:580]     Audit-Id: f8bb0e31-7233-4630-b35d-c7bbed180131
	I0127 12:12:09.685993    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:09.685993    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:10.181339    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:10.181339    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:10.181339    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:10.181339    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:10.185531    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:12:10.185599    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:10.185599    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:10.185599    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:10.185599    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:10.185599    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:10.185599    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:10 GMT
	I0127 12:12:10.185599    8732 round_trippers.go:580]     Audit-Id: 830a9b80-9123-446c-b7d1-b45f855157bf
	I0127 12:12:10.186016    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:10.277761    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:12:10.278793    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:12:10.278876    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:12:10.681295    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:10.681295    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:10.681295    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:10.681295    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:10.688202    8732 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:12:10.688202    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:10.688202    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:10 GMT
	I0127 12:12:10.688202    8732 round_trippers.go:580]     Audit-Id: db38ef7a-cd79-4e6e-9391-6ce26163c25e
	I0127 12:12:10.688428    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:10.688485    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:10.688485    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:10.688485    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:10.688928    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:10.857029    8732 main.go:141] libmachine: [stdout =====>] : 172.29.204.17
	
	I0127 12:12:10.857107    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:12:10.857918    8732 sshutil.go:53] new ssh client: &{IP:172.29.204.17 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000\id_rsa Username:docker}
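	(Before running kubectl over SSH, the driver shells out to PowerShell to read the VM's state and the first IPv4 address of its first network adapter, then opens an SSH session with the driver-generated key. A rough sketch of that address lookup from Go follows; it assumes the Hyper-V PowerShell module is available and is only an illustration, not libmachine's actual code path.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hypervVMIP asks PowerShell for the first IP address of a Hyper-V VM's
	// first network adapter, the same query seen in the log above.
	func hypervVMIP(vmName string) (string, error) {
		query := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName)
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", query).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		ip, err := hypervVMIP("multinode-659000")
		if err != nil {
			panic(err)
		}
		fmt.Println(ip) // this run resolved to 172.29.204.17
	}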
	I0127 12:12:11.009026    8732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0127 12:12:11.182027    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:11.182027    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:11.182027    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:11.182027    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:11.186370    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:12:11.186370    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:11.186370    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:11.186370    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:11.186370    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:11 GMT
	I0127 12:12:11.186370    8732 round_trippers.go:580]     Audit-Id: 13b54dfa-94d7-43ea-af66-2399328bdeaa
	I0127 12:12:11.186370    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:11.186370    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:11.186370    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:11.187030    8732 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:12:11.645103    8732 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0127 12:12:11.645188    8732 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0127 12:12:11.645188    8732 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0127 12:12:11.645264    8732 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0127 12:12:11.645264    8732 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0127 12:12:11.645264    8732 command_runner.go:130] > pod/storage-provisioner created
	I0127 12:12:11.681348    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:11.681348    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:11.681348    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:11.681348    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:11.685702    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:12:11.685702    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:11.685702    8732 round_trippers.go:580]     Audit-Id: 91132d0c-e42f-4b2e-bae8-c92ad6b54daa
	I0127 12:12:11.685702    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:11.685702    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:11.685702    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:11.685702    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:11.685702    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:11 GMT
	I0127 12:12:11.685834    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:12.182342    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:12.182446    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:12.182479    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:12.182479    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:12.187456    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:12:12.187546    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:12.187546    8732 round_trippers.go:580]     Audit-Id: c65e6edc-0610-41af-9f97-656a9f73772a
	I0127 12:12:12.187546    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:12.187546    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:12.187546    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:12.187603    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:12.187603    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:12 GMT
	I0127 12:12:12.187984    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:12.681682    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:12.681682    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:12.681682    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:12.681682    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:12.687726    8732 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:12:12.687726    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:12.687726    8732 round_trippers.go:580]     Audit-Id: 674f751f-7e24-418d-932e-3fac807c8f01
	I0127 12:12:12.687726    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:12.687726    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:12.687726    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:12.687726    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:12.687726    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:12 GMT
	I0127 12:12:12.687726    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:12.853249    8732 main.go:141] libmachine: [stdout =====>] : 172.29.204.17
	
	I0127 12:12:12.853249    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:12:12.854244    8732 sshutil.go:53] new ssh client: &{IP:172.29.204.17 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000\id_rsa Username:docker}
	I0127 12:12:12.992524    8732 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0127 12:12:13.160783    8732 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0127 12:12:13.161167    8732 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0127 12:12:13.161223    8732 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0127 12:12:13.161223    8732 round_trippers.go:463] GET https://172.29.204.17:8443/apis/storage.k8s.io/v1/storageclasses
	I0127 12:12:13.161223    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:13.161223    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:13.161223    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:13.165057    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:12:13.165057    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:13.165057    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:13.165057    8732 round_trippers.go:580]     Content-Length: 1273
	I0127 12:12:13.165141    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:13 GMT
	I0127 12:12:13.165141    8732 round_trippers.go:580]     Audit-Id: fbaa1e7e-8d11-4b90-8182-a989d1540c61
	I0127 12:12:13.165141    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:13.165141    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:13.165141    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:13.165141    8732 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"413"},"items":[{"metadata":{"name":"standard","uid":"4bdf7c25-3322-48f2-b544-94efc46a0d4b","resourceVersion":"413","creationTimestamp":"2025-01-27T12:12:13Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2025-01-27T12:12:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0127 12:12:13.165845    8732 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"4bdf7c25-3322-48f2-b544-94efc46a0d4b","resourceVersion":"413","creationTimestamp":"2025-01-27T12:12:13Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2025-01-27T12:12:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0127 12:12:13.165919    8732 round_trippers.go:463] PUT https://172.29.204.17:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0127 12:12:13.165919    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:13.165977    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:13.165977    8732 round_trippers.go:473]     Content-Type: application/json
	I0127 12:12:13.165977    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:13.179605    8732 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0127 12:12:13.179717    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:13.179717    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:13.179717    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:13.179717    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:13.179717    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:13.179717    8732 round_trippers.go:580]     Content-Length: 1220
	I0127 12:12:13.179717    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:13 GMT
	I0127 12:12:13.179717    8732 round_trippers.go:580]     Audit-Id: d43607d7-7f07-441d-9e71-21def6833c72
	I0127 12:12:13.179717    8732 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"4bdf7c25-3322-48f2-b544-94efc46a0d4b","resourceVersion":"413","creationTimestamp":"2025-01-27T12:12:13Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2025-01-27T12:12:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0127 12:12:13.181689    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:13.181689    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:13.181689    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:13.181689    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:13.187968    8732 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:12:13.187968    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:13.187968    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:13.187968    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:13.187968    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:13.187968    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:13.187968    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:13 GMT
	I0127 12:12:13.187968    8732 round_trippers.go:580]     Audit-Id: dabfbb4e-9229-4883-8986-f52d4db2dd07
	I0127 12:12:13.187968    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:13.187968    8732 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:12:13.343812    8732 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0127 12:12:13.355566    8732 addons.go:514] duration metric: took 9.9419442s for enable addons: enabled=[storage-provisioner default-storageclass]
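	(The default-storageclass step above applies storageclass.yaml over SSH, GETs the StorageClassList, and PUTs the "standard" object back with the is-default-class annotation before declaring the addon enabled. A small client-go sketch that checks the resulting state is below; the kubeconfig path is illustrative and this is a verification aid, not minikube's own reconcile code.)

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isDefaultStorageClass reports whether the named StorageClass carries the
	// default-class annotation asserted by the addon step logged above.
	func isDefaultStorageClass(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true", nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // illustrative path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ok, err := isDefaultStorageClass(context.Background(), cs, "standard")
		if err != nil {
			panic(err)
		}
		fmt.Println("standard is default:", ok)
	}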
	I0127 12:12:13.681516    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:13.681516    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:13.681516    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:13.681516    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:13.686768    8732 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:12:13.686768    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:13.686768    8732 round_trippers.go:580]     Audit-Id: 867ade28-4646-4ec9-be35-28ae5ccbbd46
	I0127 12:12:13.686768    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:13.686768    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:13.686958    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:13.686958    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:13.686958    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:13 GMT
	I0127 12:12:13.688531    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:14.181300    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:14.181300    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:14.181300    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:14.181300    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:14.185410    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:12:14.185836    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:14.185836    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:14 GMT
	I0127 12:12:14.185836    8732 round_trippers.go:580]     Audit-Id: c10a72bb-bd7b-40e9-af4e-d1d24356e928
	I0127 12:12:14.185836    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:14.185836    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:14.185836    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:14.185939    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:14.186954    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:14.682003    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:14.682003    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:14.682003    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:14.682003    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:14.685515    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:12:14.685515    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:14.685515    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:14.685515    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:14.685515    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:14.686115    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:14.686115    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:14 GMT
	I0127 12:12:14.686115    8732 round_trippers.go:580]     Audit-Id: 5a5525fc-b2a6-4981-ae37-818cadebf37e
	I0127 12:12:14.688971    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:15.182376    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:15.182476    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:15.182476    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:15.182476    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:15.186247    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:12:15.186777    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:15.186777    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:15 GMT
	I0127 12:12:15.186777    8732 round_trippers.go:580]     Audit-Id: 66898297-6901-4540-95aa-1677f6bf9536
	I0127 12:12:15.186777    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:15.186777    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:15.186777    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:15.186777    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:15.187464    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:15.188026    8732 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:12:15.681718    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:15.681718    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:15.681718    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:15.681718    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:15.686204    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:12:15.686607    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:15.686607    8732 round_trippers.go:580]     Audit-Id: 507b1fd8-d404-49eb-b8cf-f6aa483a77e9
	I0127 12:12:15.686607    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:15.686607    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:15.686607    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:15.686607    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:15.686607    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:15 GMT
	I0127 12:12:15.686754    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:16.182762    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:16.182762    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:16.182880    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:16.182880    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:16.191278    8732 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0127 12:12:16.191859    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:16.191859    8732 round_trippers.go:580]     Audit-Id: a5f50302-d1e2-4863-99a7-1109a105ec8b
	I0127 12:12:16.191859    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:16.191859    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:16.191859    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:16.191940    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:16.191940    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:16 GMT
	I0127 12:12:16.194220    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:16.682089    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:16.682089    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:16.682089    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:16.682089    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:16.685833    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:12:16.686857    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:16.686857    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:16.686857    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:16.686857    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:16.686857    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:16 GMT
	I0127 12:12:16.686857    8732 round_trippers.go:580]     Audit-Id: 2e6d02d0-acc0-4826-ac51-f39c33936d48
	I0127 12:12:16.686857    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:16.687278    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:17.181443    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:17.181443    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:17.181443    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:17.181443    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:17.186842    8732 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:12:17.187890    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:17.187890    8732 round_trippers.go:580]     Audit-Id: 0bab5e07-41e9-46bb-b6d5-9088e379659c
	I0127 12:12:17.187949    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:17.187949    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:17.187949    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:17.187949    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:17.187949    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:17 GMT
	I0127 12:12:17.188596    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:17.188836    8732 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:12:17.682390    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:17.682390    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:17.682545    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:17.682545    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:17.686168    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:12:17.686168    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:17.686707    8732 round_trippers.go:580]     Audit-Id: 4a316fec-b459-4680-a75a-a85497cdf2cd
	I0127 12:12:17.686707    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:17.686707    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:17.686707    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:17.686707    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:17.686707    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:17 GMT
	I0127 12:12:17.686884    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:18.182790    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:18.182882    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:18.182882    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:18.182882    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:18.186922    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:12:18.187017    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:18.187092    8732 round_trippers.go:580]     Audit-Id: 23b65623-2193-4161-afe3-d116db1e9b1f
	I0127 12:12:18.187092    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:18.187092    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:18.187092    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:18.187092    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:18.187092    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:18 GMT
	I0127 12:12:18.187517    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:18.682149    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:18.682220    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:18.682220    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:18.682302    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:18.686077    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:12:18.686077    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:18.686077    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:18 GMT
	I0127 12:12:18.686200    8732 round_trippers.go:580]     Audit-Id: e99494a5-a8ad-4c46-8c33-fca2c021655c
	I0127 12:12:18.686200    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:18.686200    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:18.686200    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:18.686200    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:18.686380    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:19.182123    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:19.182185    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:19.182185    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:19.182185    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:19.186418    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:12:19.186418    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:19.186418    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:19.186418    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:19 GMT
	I0127 12:12:19.186418    8732 round_trippers.go:580]     Audit-Id: 6ae02822-b1e2-478f-abf1-43ea9251fb39
	I0127 12:12:19.186418    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:19.186418    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:19.186418    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:19.186418    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:19.681534    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:19.681534    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:19.681534    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:19.681534    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:19.688298    8732 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:12:19.688335    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:19.688335    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:19.688335    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:19.688335    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:19.688335    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:19.688335    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:19 GMT
	I0127 12:12:19.688401    8732 round_trippers.go:580]     Audit-Id: d9133b32-6536-4061-bf8f-3e8704494a9e
	I0127 12:12:19.688639    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:19.689146    8732 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:12:20.181897    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:20.181897    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:20.181897    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:20.181897    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:20.186541    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:12:20.186687    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:20.186687    8732 round_trippers.go:580]     Audit-Id: fd4a31f6-bb11-491b-bf53-f0d00c6aaf9d
	I0127 12:12:20.186687    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:20.186687    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:20.186781    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:20.186781    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:20.186781    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:20 GMT
	I0127 12:12:20.187085    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:20.681834    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:20.681834    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:20.681834    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:20.681834    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:20.687269    8732 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:12:20.687379    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:20.687379    8732 round_trippers.go:580]     Audit-Id: 970111e5-245d-457c-bfcc-b325beeee8d6
	I0127 12:12:20.687379    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:20.687379    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:20.687379    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:20.687379    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:20.687379    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:20 GMT
	I0127 12:12:20.688018    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:21.182379    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:21.182453    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:21.182453    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:21.182453    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:21.187733    8732 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:12:21.187733    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:21.187860    8732 round_trippers.go:580]     Audit-Id: b3b5eec2-3dfe-4268-a9f4-d2c9d5abfa0e
	I0127 12:12:21.187860    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:21.187860    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:21.187860    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:21.187860    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:21.187860    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:21 GMT
	I0127 12:12:21.188156    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:21.681378    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:21.681378    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:21.681378    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:21.681378    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:21.685193    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:12:21.685193    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:21.685306    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:21 GMT
	I0127 12:12:21.685306    8732 round_trippers.go:580]     Audit-Id: 72765be2-200d-4f0c-ac3c-867b56b15151
	I0127 12:12:21.685306    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:21.685306    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:21.685306    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:21.685306    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:21.685536    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:22.182335    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:22.182335    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:22.182335    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:22.182335    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:22.186530    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:12:22.186647    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:22.186647    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:22.186647    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:22.186647    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:22 GMT
	I0127 12:12:22.186647    8732 round_trippers.go:580]     Audit-Id: 0b8f0a0c-25a7-4667-be44-1a2affd6aa42
	I0127 12:12:22.186647    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:22.186647    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:22.186911    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:22.187250    8732 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:12:22.682056    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:22.682147    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:22.682147    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:22.682147    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:22.687717    8732 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:12:22.687792    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:22.687792    8732 round_trippers.go:580]     Audit-Id: c586b72b-6ea6-4550-9938-9a8643d9cbf2
	I0127 12:12:22.687792    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:22.687870    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:22.687870    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:22.687870    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:22.687870    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:22 GMT
	I0127 12:12:22.688249    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:23.182063    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:23.182063    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:23.182142    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:23.182142    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:23.186984    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:12:23.187125    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:23.187125    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:23.187125    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:23.187125    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:23.187125    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:23.187125    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:23 GMT
	I0127 12:12:23.187125    8732 round_trippers.go:580]     Audit-Id: 325cd599-0c1f-4796-8fdf-5e2e28c27185
	I0127 12:12:23.187620    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:23.681678    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:23.681678    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:23.681678    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:23.681678    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:23.686223    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:12:23.686223    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:23.686341    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:23 GMT
	I0127 12:12:23.686341    8732 round_trippers.go:580]     Audit-Id: 20cc3561-4afc-4fce-ae00-e7ee78ff000d
	I0127 12:12:23.686341    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:23.686341    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:23.686341    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:23.686341    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:23.686644    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:24.182103    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:24.182103    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:24.182103    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:24.182103    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:24.191172    8732 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0127 12:12:24.191207    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:24.191207    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:24.191207    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:24.191207    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:24 GMT
	I0127 12:12:24.191207    8732 round_trippers.go:580]     Audit-Id: 9c66de70-f36a-4c00-a17e-e6a117365605
	I0127 12:12:24.191207    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:24.191311    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:24.191781    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:24.192330    8732 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:12:24.682433    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:24.682523    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:24.682523    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:24.682523    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:24.685918    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:12:24.685918    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:24.685918    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:24.685918    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:24.685918    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:24.685918    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:24 GMT
	I0127 12:12:24.685918    8732 round_trippers.go:580]     Audit-Id: ecfda9f5-f54f-4dd1-a68f-d2f00e7145e0
	I0127 12:12:24.685918    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:24.688411    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:25.182684    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:25.182770    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:25.182770    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:25.182770    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:25.190846    8732 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0127 12:12:25.190959    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:25.190959    8732 round_trippers.go:580]     Audit-Id: 95756e71-907c-4414-81a5-663ab5c33039
	I0127 12:12:25.190959    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:25.190959    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:25.190959    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:25.190959    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:25.190959    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:25 GMT
	I0127 12:12:25.192567    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:25.681463    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:25.681463    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:25.681463    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:25.681463    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:25.685592    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:12:25.685592    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:25.685592    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:25.685592    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:25 GMT
	I0127 12:12:25.685700    8732 round_trippers.go:580]     Audit-Id: 6af6cecf-f639-43b8-8c78-e1bfb4bccb57
	I0127 12:12:25.685700    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:25.685700    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:25.685700    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:25.685838    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"336","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0127 12:12:26.181673    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:26.181673    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:26.181673    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:26.181673    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:26.185215    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:12:26.185319    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:26.185319    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:26.185319    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:26 GMT
	I0127 12:12:26.185319    8732 round_trippers.go:580]     Audit-Id: 8c2af489-b75f-4ae3-957c-e6ae4454159f
	I0127 12:12:26.185319    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:26.185319    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:26.185319    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:26.185907    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"423","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0127 12:12:26.186406    8732 node_ready.go:49] node "multinode-659000" has status "Ready":"True"
	I0127 12:12:26.186483    8732 node_ready.go:38] duration metric: took 22.0050148s for node "multinode-659000" to be "Ready" ...
	I0127 12:12:26.186483    8732 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
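	(The polling loop above is minikube's node_ready wait: a GET against /api/v1/nodes/multinode-659000 roughly every 500ms until the node reports the Ready condition, which here took ~22s before the log moves on to system-critical pods. As a rough illustration only, and not minikube's own implementation, an equivalent wait could be written against the same API with client-go; the kubeconfig path below is a placeholder, and the node name and timeout are taken from the log.)

// Illustrative sketch (assumption: client-go is available and a kubeconfig
// for this cluster exists at the placeholder path). It mirrors the pattern
// visible in the log: poll the node every ~500ms until Ready or timeout.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig location; not taken from the test report.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same order of timeout the log mentions
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "multinode-659000", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the cadence of the GETs in the log
	}
	fmt.Println("timed out waiting for node to be Ready")
}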
	I0127 12:12:26.186738    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods
	I0127 12:12:26.186782    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:26.186852    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:26.186931    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:26.193215    8732 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:12:26.193215    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:26.193215    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:26.193215    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:26.193215    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:26.193215    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:26.193215    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:26 GMT
	I0127 12:12:26.193215    8732 round_trippers.go:580]     Audit-Id: e270b804-6195-45be-aa9d-64f1edd799ce
	I0127 12:12:26.195825    8732 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"427","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 58177 chars]
	I0127 12:12:26.201776    8732 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace to be "Ready" ...
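	(After listing kube-system pods, the log switches to per-pod waits, starting with the coredns pod named above; each cycle re-fetches the pod and the node it is scheduled on. A minimal sketch of the equivalent Ready-condition check, again assuming client-go, a placeholder kubeconfig path, and the pod name copied from the log, is shown below; it is not minikube's pod_ready code.)

// Illustrative sketch (assumptions as above): fetch one kube-system pod and
// report whether its PodReady condition is True, as the log is waiting for.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady returns true when the pod carries a PodReady condition with status True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig location; not taken from the test report.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-668d6bf9bc-2qw6w", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s ready: %v\n", pod.Name, podIsReady(pod))
}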
	I0127 12:12:26.201942    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:12:26.201942    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:26.201942    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:26.201942    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:26.204827    8732 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:12:26.204827    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:26.204827    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:26.204827    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:26.204827    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:26 GMT
	I0127 12:12:26.204827    8732 round_trippers.go:580]     Audit-Id: f5c2c0be-0ca0-4ee9-a380-bd20d20816af
	I0127 12:12:26.204827    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:26.204827    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:26.205409    8732 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"427","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6703 chars]
	I0127 12:12:26.205981    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:26.206132    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:26.206132    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:26.206132    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:26.211873    8732 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:12:26.211873    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:26.211972    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:26.211972    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:26.211972    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:26.211972    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:26 GMT
	I0127 12:12:26.211972    8732 round_trippers.go:580]     Audit-Id: d3dc1909-ee0a-4995-adb9-ed24bbd2d5c7
	I0127 12:12:26.211972    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:26.212194    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"423","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0127 12:12:26.702734    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:12:26.702734    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:26.702734    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:26.702734    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:26.706322    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:12:26.706433    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:26.706433    8732 round_trippers.go:580]     Audit-Id: 5e0526d0-4c7d-41d2-bd5b-cb532a2b9a13
	I0127 12:12:26.706433    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:26.706433    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:26.706433    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:26.706433    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:26.706433    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:26 GMT
	I0127 12:12:26.706828    8732 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"427","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6703 chars]
	I0127 12:12:26.707538    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:26.707538    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:26.707538    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:26.707538    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:26.709110    8732 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0127 12:12:26.709980    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:26.709980    8732 round_trippers.go:580]     Audit-Id: a8362d7a-33f3-4128-9f75-7e2c60ef517c
	I0127 12:12:26.710085    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:26.710085    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:26.710085    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:26.710085    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:26.710085    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:26 GMT
	I0127 12:12:26.710252    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"423","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0127 12:12:27.202691    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:12:27.202691    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:27.202691    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:27.202691    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:27.208972    8732 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:12:27.208972    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:27.208972    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:27.208972    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:27.209058    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:27.209058    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:27 GMT
	I0127 12:12:27.209058    8732 round_trippers.go:580]     Audit-Id: 9559aada-859d-4140-ac2e-901daff57792
	I0127 12:12:27.209058    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:27.209607    8732 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"427","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6703 chars]
	I0127 12:12:27.210219    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:27.210219    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:27.210219    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:27.210219    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:27.213353    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:12:27.213435    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:27.213435    8732 round_trippers.go:580]     Audit-Id: a491fdda-0405-487c-8971-2f9d7b1b5596
	I0127 12:12:27.213435    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:27.213435    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:27.213435    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:27.213435    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:27.213493    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:27 GMT
	I0127 12:12:27.215201    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"423","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0127 12:12:27.703070    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:12:27.703070    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:27.703160    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:27.703160    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:27.706712    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:12:27.706712    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:27.706712    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:27.706712    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:27.706808    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:27.706808    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:27.706808    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:27 GMT
	I0127 12:12:27.706808    8732 round_trippers.go:580]     Audit-Id: 21fbfef6-e0b7-44b7-a054-833dd1b23b93
	I0127 12:12:27.707031    8732 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"427","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6703 chars]
	I0127 12:12:27.707319    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:27.707319    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:27.707319    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:27.707319    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:27.710668    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:12:27.710770    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:27.710770    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:27.710770    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:27 GMT
	I0127 12:12:27.710770    8732 round_trippers.go:580]     Audit-Id: 5e21ae64-e494-46db-aea8-4b84b3e98c55
	I0127 12:12:27.710770    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:27.710846    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:27.710846    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:27.710991    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"423","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0127 12:12:28.202895    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:12:28.202895    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:28.202895    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:28.202895    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:28.207390    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:12:28.207492    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:28.207539    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:28 GMT
	I0127 12:12:28.207539    8732 round_trippers.go:580]     Audit-Id: fd3226eb-986d-43ad-a51e-0ae3e9918840
	I0127 12:12:28.207539    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:28.207539    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:28.207539    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:28.207539    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:28.208478    8732 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"442","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6834 chars]
	I0127 12:12:28.209032    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:28.209032    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:28.209032    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:28.209032    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:28.211841    8732 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:12:28.212132    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:28.212132    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:28 GMT
	I0127 12:12:28.212132    8732 round_trippers.go:580]     Audit-Id: 398e891d-4778-4f18-99a5-6ff531bccb5d
	I0127 12:12:28.212132    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:28.212132    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:28.212132    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:28.212132    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:28.212262    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"423","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0127 12:12:28.212924    8732 pod_ready.go:93] pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace has status "Ready":"True"
	I0127 12:12:28.212924    8732 pod_ready.go:82] duration metric: took 2.0110946s for pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace to be "Ready" ...
	I0127 12:12:28.212924    8732 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:12:28.213033    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-659000
	I0127 12:12:28.213109    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:28.213109    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:28.213109    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:28.217204    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:12:28.217889    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:28.217889    8732 round_trippers.go:580]     Audit-Id: 7c905aac-e1f1-41e5-9d84-213ab84ee1b4
	I0127 12:12:28.217889    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:28.217889    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:28.217889    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:28.217889    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:28.217889    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:28 GMT
	I0127 12:12:28.218439    8732 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-659000","namespace":"kube-system","uid":"d2a9c448-86a1-48e3-8b48-345c937e5bb4","resourceVersion":"391","creationTimestamp":"2025-01-27T12:11:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.29.204.17:2379","kubernetes.io/config.hash":"7291ea72d8be6e47ed8b536906d73549","kubernetes.io/config.mirror":"7291ea72d8be6e47ed8b536906d73549","kubernetes.io/config.seen":"2025-01-27T12:11:59.106493267Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:11:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6465 chars]
	I0127 12:12:28.218985    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:28.219044    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:28.219044    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:28.219044    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:28.223352    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:12:28.223376    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:28.223455    8732 round_trippers.go:580]     Audit-Id: e571736c-1ed8-4a0f-8e9e-f45cc045b604
	I0127 12:12:28.223475    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:28.223475    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:28.223475    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:28.223475    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:28.223475    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:28 GMT
	I0127 12:12:28.223688    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"423","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0127 12:12:28.224110    8732 pod_ready.go:93] pod "etcd-multinode-659000" in "kube-system" namespace has status "Ready":"True"
	I0127 12:12:28.224110    8732 pod_ready.go:82] duration metric: took 11.1865ms for pod "etcd-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:12:28.224110    8732 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:12:28.224110    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-659000
	I0127 12:12:28.224110    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:28.224110    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:28.224110    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:28.226887    8732 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:12:28.226887    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:28.226887    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:28.226887    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:28.226887    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:28 GMT
	I0127 12:12:28.226887    8732 round_trippers.go:580]     Audit-Id: 27e58a96-a2af-4bca-a294-ac5676fda7e7
	I0127 12:12:28.226887    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:28.226887    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:28.226887    8732 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-659000","namespace":"kube-system","uid":"f19e9efc-57cc-4e2a-b365-920592a7f352","resourceVersion":"397","creationTimestamp":"2025-01-27T12:11:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.29.204.17:8443","kubernetes.io/config.hash":"6bf31ca1befb4fb3e8f2fd27458a3b80","kubernetes.io/config.mirror":"6bf31ca1befb4fb3e8f2fd27458a3b80","kubernetes.io/config.seen":"2025-01-27T12:11:51.419792725Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:11:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0127 12:12:28.228257    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:28.228308    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:28.228308    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:28.228388    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:28.231198    8732 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:12:28.231198    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:28.231198    8732 round_trippers.go:580]     Audit-Id: c9179f81-cc95-48ed-8772-500b8ec1f844
	I0127 12:12:28.231251    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:28.231251    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:28.231251    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:28.231251    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:28.231251    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:28 GMT
	I0127 12:12:28.231317    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"423","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0127 12:12:28.231317    8732 pod_ready.go:93] pod "kube-apiserver-multinode-659000" in "kube-system" namespace has status "Ready":"True"
	I0127 12:12:28.231842    8732 pod_ready.go:82] duration metric: took 7.7322ms for pod "kube-apiserver-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:12:28.231884    8732 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:12:28.231884    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-659000
	I0127 12:12:28.231884    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:28.231884    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:28.231884    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:28.233481    8732 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0127 12:12:28.233481    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:28.233481    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:28.233481    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:28 GMT
	I0127 12:12:28.233481    8732 round_trippers.go:580]     Audit-Id: d4c2410e-7df6-43d5-90f7-b88a7cf601c3
	I0127 12:12:28.234431    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:28.234431    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:28.234431    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:28.234588    8732 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-659000","namespace":"kube-system","uid":"8be02f36-161c-44f3-b526-56db3b8a007a","resourceVersion":"401","creationTimestamp":"2025-01-27T12:11:59Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4a14d0700eafa36dd3913955f2c0f839","kubernetes.io/config.mirror":"4a14d0700eafa36dd3913955f2c0f839","kubernetes.io/config.seen":"2025-01-27T12:11:59.106472767Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:11:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0127 12:12:28.234588    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:28.235119    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:28.235119    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:28.235119    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:28.237429    8732 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:12:28.237429    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:28.237429    8732 round_trippers.go:580]     Audit-Id: 480c9d4a-224d-4f66-aa95-bd89452e975f
	I0127 12:12:28.237429    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:28.237429    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:28.237429    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:28.237429    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:28.237429    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:28 GMT
	I0127 12:12:28.237429    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"423","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0127 12:12:28.238046    8732 pod_ready.go:93] pod "kube-controller-manager-multinode-659000" in "kube-system" namespace has status "Ready":"True"
	I0127 12:12:28.238128    8732 pod_ready.go:82] duration metric: took 6.2441ms for pod "kube-controller-manager-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:12:28.238171    8732 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s46mv" in "kube-system" namespace to be "Ready" ...
	I0127 12:12:28.238251    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s46mv
	I0127 12:12:28.238251    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:28.238327    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:28.238349    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:28.242158    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:12:28.242158    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:28.242158    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:28.242158    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:28 GMT
	I0127 12:12:28.242158    8732 round_trippers.go:580]     Audit-Id: 5573c0df-e534-4a31-b3f4-29fa5eb8ccea
	I0127 12:12:28.242158    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:28.242158    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:28.242158    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:28.242999    8732 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s46mv","generateName":"kube-proxy-","namespace":"kube-system","uid":"ae3b8daf-d674-4cfe-8652-cb5ff6ba8615","resourceVersion":"392","creationTimestamp":"2025-01-27T12:12:03Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d88eb776-b464-4f2b-8140-44249610a7fa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d88eb776-b464-4f2b-8140-44249610a7fa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6194 chars]
	I0127 12:12:28.243721    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:28.243799    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:28.243821    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:28.243846    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:28.250582    8732 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:12:28.250582    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:28.250582    8732 round_trippers.go:580]     Audit-Id: 7fa48410-9c3f-4c00-997f-78820f85c8df
	I0127 12:12:28.250582    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:28.250582    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:28.250582    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:28.250582    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:28.250582    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:28 GMT
	I0127 12:12:28.250582    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"423","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0127 12:12:28.250582    8732 pod_ready.go:93] pod "kube-proxy-s46mv" in "kube-system" namespace has status "Ready":"True"
	I0127 12:12:28.250582    8732 pod_ready.go:82] duration metric: took 12.4107ms for pod "kube-proxy-s46mv" in "kube-system" namespace to be "Ready" ...
	I0127 12:12:28.250582    8732 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:12:28.403708    8732 request.go:632] Waited for 153.1247ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-659000
	I0127 12:12:28.403708    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-659000
	I0127 12:12:28.403708    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:28.403708    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:28.403708    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:28.407771    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:12:28.407771    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:28.407863    8732 round_trippers.go:580]     Audit-Id: b96ec1b6-dece-4975-b13d-eceb0c03dff7
	I0127 12:12:28.407863    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:28.407863    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:28.407863    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:28.407863    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:28.407863    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:28 GMT
	I0127 12:12:28.408039    8732 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-659000","namespace":"kube-system","uid":"52b91964-a331-4925-9e07-c8df32b4176d","resourceVersion":"403","creationTimestamp":"2025-01-27T12:11:58Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e6c90fc43fa6c0754218ff1c4162045d","kubernetes.io/config.mirror":"e6c90fc43fa6c0754218ff1c4162045d","kubernetes.io/config.seen":"2025-01-27T12:11:51.419790825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:11:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5357 chars]
	I0127 12:12:28.602921    8732 request.go:632] Waited for 194.4153ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:28.602921    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:12:28.603463    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:28.603463    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:28.603463    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:28.607132    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:12:28.607198    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:28.607198    8732 round_trippers.go:580]     Audit-Id: c03e3a53-193f-4ebd-a571-c49af1c303a7
	I0127 12:12:28.607198    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:28.607198    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:28.607198    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:28.607198    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:28.607198    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:28 GMT
	I0127 12:12:28.607417    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"423","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0127 12:12:28.607591    8732 pod_ready.go:93] pod "kube-scheduler-multinode-659000" in "kube-system" namespace has status "Ready":"True"
	I0127 12:12:28.607591    8732 pod_ready.go:82] duration metric: took 357.0055ms for pod "kube-scheduler-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:12:28.607591    8732 pod_ready.go:39] duration metric: took 2.4210508s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
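The readiness waits above (pod_ready.go) poll each system pod until its PodReady condition reports True. The following is an illustrative sketch of that pattern using client-go, not minikube's actual pod_ready.go; it assumes a recent client-go/apimachinery (for wait.PollUntilContextTimeout), a kubeconfig at the default location, and uses the coredns pod name from this run purely as an example.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the named pod until its PodReady condition is True,
// or the overall timeout expires.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Assumption: kubeconfig at the default per-user path.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-668d6bf9bc-2qw6w", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}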
	I0127 12:12:28.607591    8732 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:12:28.618084    8732 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:12:28.646925    8732 command_runner.go:130] > 2097
	I0127 12:12:28.647015    8732 api_server.go:72] duration metric: took 25.2332339s to wait for apiserver process to appear ...
	I0127 12:12:28.647097    8732 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:12:28.647154    8732 api_server.go:253] Checking apiserver healthz at https://172.29.204.17:8443/healthz ...
	I0127 12:12:28.656076    8732 api_server.go:279] https://172.29.204.17:8443/healthz returned 200:
	ok
	I0127 12:12:28.656185    8732 round_trippers.go:463] GET https://172.29.204.17:8443/version
	I0127 12:12:28.656264    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:28.656321    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:28.656321    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:28.658639    8732 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:12:28.658639    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:28.658763    8732 round_trippers.go:580]     Content-Length: 263
	I0127 12:12:28.658763    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:28 GMT
	I0127 12:12:28.658763    8732 round_trippers.go:580]     Audit-Id: 0ca55e49-fd57-4d3c-b3b0-d20e70b72e1c
	I0127 12:12:28.658763    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:28.658763    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:28.658763    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:28.658763    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:28.658763    8732 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "32",
	  "gitVersion": "v1.32.1",
	  "gitCommit": "e9c9be4007d1664e68796af02b8978640d2c1b26",
	  "gitTreeState": "clean",
	  "buildDate": "2025-01-15T14:31:55Z",
	  "goVersion": "go1.23.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0127 12:12:28.658883    8732 api_server.go:141] control plane version: v1.32.1
	I0127 12:12:28.658937    8732 api_server.go:131] duration metric: took 11.8398ms to wait for apiserver health ...
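The healthz and /version probes above can be reproduced with the discovery client's REST interface. This is a minimal sketch under the same kubeconfig assumption as before, not minikube's api_server.go: it issues a raw GET on /healthz (the apiserver answers "ok" when healthy) and then decodes /version into a version.Info.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Raw GET on /healthz through the authenticated REST client.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version, decoded into version.Info (major, minor, gitVersion, ...).
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s\n", v.GitVersion)
}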
	I0127 12:12:28.658937    8732 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:12:28.802753    8732 request.go:632] Waited for 143.7266ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods
	I0127 12:12:28.802753    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods
	I0127 12:12:28.802753    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:28.802753    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:28.802753    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:28.808426    8732 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:12:28.808426    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:28.808527    8732 round_trippers.go:580]     Audit-Id: 2a7206c0-c93f-4cd9-a1fd-43db5e2f268f
	I0127 12:12:28.808527    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:28.808527    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:28.808527    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:28.808527    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:28.808527    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:28 GMT
	I0127 12:12:28.809749    8732 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"442","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 58291 chars]
	I0127 12:12:28.812390    8732 system_pods.go:59] 8 kube-system pods found
	I0127 12:12:28.812447    8732 system_pods.go:61] "coredns-668d6bf9bc-2qw6w" [8f0367fc-d842-4cc3-8e71-30869a548243] Running
	I0127 12:12:28.812523    8732 system_pods.go:61] "etcd-multinode-659000" [d2a9c448-86a1-48e3-8b48-345c937e5bb4] Running
	I0127 12:12:28.812523    8732 system_pods.go:61] "kindnet-z2hqq" [9b617a9c-e2b8-45fd-bee2-45cb03d4cd42] Running
	I0127 12:12:28.812523    8732 system_pods.go:61] "kube-apiserver-multinode-659000" [f19e9efc-57cc-4e2a-b365-920592a7f352] Running
	I0127 12:12:28.812523    8732 system_pods.go:61] "kube-controller-manager-multinode-659000" [8be02f36-161c-44f3-b526-56db3b8a007a] Running
	I0127 12:12:28.812523    8732 system_pods.go:61] "kube-proxy-s46mv" [ae3b8daf-d674-4cfe-8652-cb5ff6ba8615] Running
	I0127 12:12:28.812523    8732 system_pods.go:61] "kube-scheduler-multinode-659000" [52b91964-a331-4925-9e07-c8df32b4176d] Running
	I0127 12:12:28.812523    8732 system_pods.go:61] "storage-provisioner" [bcfd7913-1bc0-4c24-882f-2be92ec9b046] Running
	I0127 12:12:28.812523    8732 system_pods.go:74] duration metric: took 153.5839ms to wait for pod list to return data ...
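The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's default client-side rate limiter (QPS 5, burst 10 when the rest.Config leaves them at zero). As a sketch only, raising QPS and Burst on the rest.Config removes that self-imposed waiting; the specific values below are arbitrary, and this does not affect server-side API Priority and Fairness.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Defaults are QPS=5 and Burst=10 when left unset; these values are illustrative.
	cfg.QPS = 50
	cfg.Burst = 100

	cs := kubernetes.NewForConfigOrDie(cfg)
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
}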
	I0127 12:12:28.812523    8732 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:12:29.002675    8732 request.go:632] Waited for 190.0115ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.204.17:8443/api/v1/namespaces/default/serviceaccounts
	I0127 12:12:29.002675    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/namespaces/default/serviceaccounts
	I0127 12:12:29.002675    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:29.002675    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:29.002675    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:29.007303    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:12:29.007303    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:29.007303    8732 round_trippers.go:580]     Audit-Id: 22806c41-2bfb-46c6-a426-f0e037f57c05
	I0127 12:12:29.007303    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:29.007303    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:29.007303    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:29.007303    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:29.007303    8732 round_trippers.go:580]     Content-Length: 261
	I0127 12:12:29.007303    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:29 GMT
	I0127 12:12:29.007303    8732 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"bff364bd-d78f-41e4-90bc-c2009fb4813f","resourceVersion":"328","creationTimestamp":"2025-01-27T12:12:03Z"}}]}
	I0127 12:12:29.007921    8732 default_sa.go:45] found service account: "default"
	I0127 12:12:29.008021    8732 default_sa.go:55] duration metric: took 195.4191ms for default service account to be created ...
	I0127 12:12:29.008021    8732 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:12:29.202727    8732 request.go:632] Waited for 194.6157ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods
	I0127 12:12:29.202985    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods
	I0127 12:12:29.202985    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:29.202985    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:29.202985    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:29.209841    8732 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:12:29.209841    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:29.209841    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:29.209841    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:29.209841    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:29.209841    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:29 GMT
	I0127 12:12:29.209841    8732 round_trippers.go:580]     Audit-Id: c9ef43eb-c2ff-4752-9b24-e8134b56ba0e
	I0127 12:12:29.209841    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:29.211736    8732 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"442","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 58291 chars]
	I0127 12:12:29.213569    8732 system_pods.go:87] 8 kube-system pods found
	I0127 12:12:29.403539    8732 request.go:632] Waited for 189.9675ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-dns
	I0127 12:12:29.404094    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-dns
	I0127 12:12:29.404094    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:29.404162    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:29.404162    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:29.408650    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:12:29.408732    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:29.408732    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:29.408732    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:29 GMT
	I0127 12:12:29.408732    8732 round_trippers.go:580]     Audit-Id: 3cd1019d-31a1-4040-9111-aa550782b6e5
	I0127 12:12:29.408732    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:29.408806    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:29.408806    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:29.409126    8732 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"442","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 6887 chars]
	I0127 12:12:29.409759    8732 system_pods.go:105] "coredns-668d6bf9bc-2qw6w" [8f0367fc-d842-4cc3-8e71-30869a548243] Running
	I0127 12:12:29.409759    8732 system_pods.go:105] "etcd-multinode-659000" [d2a9c448-86a1-48e3-8b48-345c937e5bb4] Running
	I0127 12:12:29.409759    8732 system_pods.go:105] "kindnet-z2hqq" [9b617a9c-e2b8-45fd-bee2-45cb03d4cd42] Running
	I0127 12:12:29.409759    8732 system_pods.go:105] "kube-apiserver-multinode-659000" [f19e9efc-57cc-4e2a-b365-920592a7f352] Running
	I0127 12:12:29.409759    8732 system_pods.go:105] "kube-controller-manager-multinode-659000" [8be02f36-161c-44f3-b526-56db3b8a007a] Running
	I0127 12:12:29.409759    8732 system_pods.go:105] "kube-proxy-s46mv" [ae3b8daf-d674-4cfe-8652-cb5ff6ba8615] Running
	I0127 12:12:29.409759    8732 system_pods.go:105] "kube-scheduler-multinode-659000" [52b91964-a331-4925-9e07-c8df32b4176d] Running
	I0127 12:12:29.409759    8732 system_pods.go:105] "storage-provisioner" [bcfd7913-1bc0-4c24-882f-2be92ec9b046] Running
	I0127 12:12:29.409759    8732 system_pods.go:147] duration metric: took 401.7345ms to wait for k8s-apps to be running ...
	I0127 12:12:29.409759    8732 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 12:12:29.420100    8732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:12:29.447808    8732 system_svc.go:56] duration metric: took 38.0487ms WaitForService to wait for kubelet
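The kubelet service check above runs `systemctl is-active --quiet ...` over SSH inside the VM and treats a zero exit status as "active". A hypothetical local equivalent (no SSH, no sudo) looks like this; the exit-code convention is what matters.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet kubelet` exits 0 when the unit is active.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		// A non-zero exit (exec.ExitError) means the unit is not active.
		fmt.Printf("kubelet is not active: %v\n", err)
		return
	}
	fmt.Println("kubelet is active")
}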
	I0127 12:12:29.447808    8732 kubeadm.go:582] duration metric: took 26.0340195s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:12:29.447808    8732 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:12:29.602878    8732 request.go:632] Waited for 155.0681ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.204.17:8443/api/v1/nodes
	I0127 12:12:29.604495    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes
	I0127 12:12:29.604586    8732 round_trippers.go:469] Request Headers:
	I0127 12:12:29.604673    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:12:29.604673    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:12:29.611536    8732 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:12:29.611653    8732 round_trippers.go:577] Response Headers:
	I0127 12:12:29.611653    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:12:29.611653    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:12:29.611653    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:12:29.611653    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:12:29 GMT
	I0127 12:12:29.611653    8732 round_trippers.go:580]     Audit-Id: b3e822a3-a495-435b-9466-236b9a33ae18
	I0127 12:12:29.611653    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:12:29.611741    8732 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"423","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4835 chars]
	I0127 12:12:29.612461    8732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:12:29.612461    8732 node_conditions.go:123] node cpu capacity is 2
	I0127 12:12:29.612548    8732 node_conditions.go:105] duration metric: took 164.7379ms to run NodePressure ...
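The NodePressure step above reads the node's reported capacity (ephemeral storage 17734596Ki, 2 CPUs in this run). A small illustrative sketch, again not minikube's node_conditions.go and under the same kubeconfig assumption, lists the nodes and prints those two capacity fields.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a ResourceList keyed by resource name.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}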
	I0127 12:12:29.612548    8732 start.go:241] waiting for startup goroutines ...
	I0127 12:12:29.612548    8732 start.go:246] waiting for cluster config update ...
	I0127 12:12:29.612548    8732 start.go:255] writing updated cluster config ...
	I0127 12:12:29.619127    8732 out.go:201] 
	I0127 12:12:29.623563    8732 config.go:182] Loaded profile config "ha-011400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:12:29.632285    8732 config.go:182] Loaded profile config "multinode-659000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:12:29.632285    8732 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\config.json ...
	I0127 12:12:29.641272    8732 out.go:177] * Starting "multinode-659000-m02" worker node in "multinode-659000" cluster
	I0127 12:12:29.643780    8732 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0127 12:12:29.643780    8732 cache.go:56] Caching tarball of preloaded images
	I0127 12:12:29.644721    8732 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 12:12:29.644721    8732 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0127 12:12:29.644721    8732 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\config.json ...
	I0127 12:12:29.649981    8732 start.go:360] acquireMachinesLock for multinode-659000-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:12:29.650647    8732 start.go:364] duration metric: took 665.5µs to acquireMachinesLock for "multinode-659000-m02"
	I0127 12:12:29.650827    8732 start.go:93] Provisioning new machine with config: &{Name:multinode-659000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-659000
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.204.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString
:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0127 12:12:29.650827    8732 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0127 12:12:29.652849    8732 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0127 12:12:29.653796    8732 start.go:159] libmachine.API.Create for "multinode-659000" (driver="hyperv")
	I0127 12:12:29.653796    8732 client.go:168] LocalClient.Create starting
	I0127 12:12:29.653796    8732 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem
	I0127 12:12:29.654800    8732 main.go:141] libmachine: Decoding PEM data...
	I0127 12:12:29.654800    8732 main.go:141] libmachine: Parsing certificate...
	I0127 12:12:29.654800    8732 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem
	I0127 12:12:29.654800    8732 main.go:141] libmachine: Decoding PEM data...
	I0127 12:12:29.654800    8732 main.go:141] libmachine: Parsing certificate...
	I0127 12:12:29.654800    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0127 12:12:31.472934    8732 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0127 12:12:31.473882    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:12:31.473912    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0127 12:12:33.141880    8732 main.go:141] libmachine: [stdout =====>] : False
	
	I0127 12:12:33.141880    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:12:33.141880    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0127 12:12:34.561694    8732 main.go:141] libmachine: [stdout =====>] : True
	
	I0127 12:12:34.561694    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:12:34.561694    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0127 12:12:38.066134    8732 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0127 12:12:38.067161    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:12:38.069436    8732 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube6/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0127 12:12:38.587840    8732 main.go:141] libmachine: Creating SSH key...
	I0127 12:12:38.718883    8732 main.go:141] libmachine: Creating VM...
	I0127 12:12:38.718883    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0127 12:12:41.575966    8732 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0127 12:12:41.576788    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:12:41.576909    8732 main.go:141] libmachine: Using switch "Default Switch"
	I0127 12:12:41.576909    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0127 12:12:43.258788    8732 main.go:141] libmachine: [stdout =====>] : True
	
	I0127 12:12:43.258788    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:12:43.258788    8732 main.go:141] libmachine: Creating VHD
	I0127 12:12:43.258788    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0127 12:12:46.984147    8732 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube6
	Path                    : C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : F08CD109-90DC-4472-A47F-BFA8337DE546
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0127 12:12:46.985180    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:12:46.985180    8732 main.go:141] libmachine: Writing magic tar header
	I0127 12:12:46.985394    8732 main.go:141] libmachine: Writing SSH key tar header
	I0127 12:12:46.997663    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0127 12:12:50.130020    8732 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:12:50.130020    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:12:50.130558    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000-m02\disk.vhd' -SizeBytes 20000MB
	I0127 12:12:52.619296    8732 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:12:52.619296    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:12:52.620374    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-659000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0127 12:12:56.176214    8732 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-659000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0127 12:12:56.176214    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:12:56.176214    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-659000-m02 -DynamicMemoryEnabled $false
	I0127 12:12:58.394367    8732 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:12:58.394661    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:12:58.394661    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-659000-m02 -Count 2
	I0127 12:13:00.599433    8732 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:13:00.599433    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:13:00.599721    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-659000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000-m02\boot2docker.iso'
	I0127 12:13:03.144955    8732 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:13:03.145601    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:13:03.145698    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-659000-m02 -Path 'C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000-m02\disk.vhd'
	I0127 12:13:05.744516    8732 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:13:05.744789    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:13:05.744789    8732 main.go:141] libmachine: Starting VM...
	I0127 12:13:05.744789    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-659000-m02
	I0127 12:13:08.825705    8732 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:13:08.825705    8732 main.go:141] libmachine: [stderr =====>] : 
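The block above is the complete Hyper-V provisioning sequence for the m02 worker, all driven through powershell.exe -NoProfile -NonInteractive: create a small fixed VHD, write the magic and SSH-key tar headers into it, convert the disk to dynamic and grow it to 20000MB, create the VM on "Default Switch", pin static memory and two vCPUs, attach the boot2docker ISO and the data disk, then Start-VM. A rough Go sketch of driving such a sequence (the runPS helper and the reduced step list are assumptions for illustration, not minikube's actual libmachine code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // runPS invokes a PowerShell snippet the same way the log shows:
    // no profile, non-interactive, Hyper-V module-qualified cmdlets.
    func runPS(ps string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).CombinedOutput()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        vm := "multinode-659000-m02" // node name taken from the log
        steps := []string{
            fmt.Sprintf("Hyper-V\\New-VM %s -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB", vm),
            fmt.Sprintf("Hyper-V\\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false", vm),
            fmt.Sprintf("Hyper-V\\Set-VMProcessor %s -Count 2", vm),
            fmt.Sprintf("Hyper-V\\Start-VM %s", vm),
        }
        for _, step := range steps {
            if out, err := runPS(step); err != nil {
                fmt.Printf("step failed: %s\n%v\n%s\n", step, err, out)
                return
            }
        }
    }

Each command in the log follows this shape; the VHD preparation (New-VHD, Convert-VHD, Resize-VHD) and the DVD/disk attachment are omitted from the sketch for brevity.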
	I0127 12:13:08.825705    8732 main.go:141] libmachine: Waiting for host to start...
	I0127 12:13:08.826451    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:13:11.075638    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:13:11.076455    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:13:11.076527    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:13:13.536696    8732 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:13:13.536696    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:13:14.537408    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:13:16.709135    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:13:16.709135    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:13:16.709491    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:13:19.253957    8732 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:13:19.254051    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:13:20.254277    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:13:22.421252    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:13:22.421252    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:13:22.422270    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:13:24.958952    8732 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:13:24.959015    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:13:25.959390    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:13:28.152548    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:13:28.153545    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:13:28.153545    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:13:30.660444    8732 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:13:30.661241    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:13:31.661599    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:13:33.866887    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:13:33.866887    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:13:33.866887    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:13:36.409800    8732 main.go:141] libmachine: [stdout =====>] : 172.29.199.129
	
	I0127 12:13:36.410601    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:13:36.410601    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:13:38.477276    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:13:38.477276    8732 main.go:141] libmachine: [stderr =====>] : 
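After Start-VM the driver loops, alternating between querying ( Hyper-V\Get-VM <name> ).state and the first network adapter's first IP address, sleeping about a second between attempts until DHCP on the Default Switch hands out an address (172.29.199.129 above). A minimal polling sketch under the same assumptions (runPS as in the previous sketch; the timeout value is arbitrary):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func runPS(ps string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).CombinedOutput()
        return strings.TrimSpace(string(out)), err
    }

    // waitForIP mirrors the "Waiting for host to start..." loop in the log:
    // poll the VM's first adapter until it reports an address or we time out.
    func waitForIP(vm string, timeout time.Duration) (string, error) {
        query := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm)
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            ip, err := runPS(query)
            if err == nil && ip != "" {
                return ip, nil
            }
            time.Sleep(time.Second)
        }
        return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
    }

    func main() {
        ip, err := waitForIP("multinode-659000-m02", 5*time.Minute)
        if err != nil {
            panic(err)
        }
        fmt.Println(ip)
    }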
	I0127 12:13:38.477385    8732 machine.go:93] provisionDockerMachine start ...
	I0127 12:13:38.477523    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:13:40.539825    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:13:40.540796    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:13:40.540883    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:13:42.950459    8732 main.go:141] libmachine: [stdout =====>] : 172.29.199.129
	
	I0127 12:13:42.950459    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:13:42.956514    8732 main.go:141] libmachine: Using SSH client type: native
	I0127 12:13:42.972396    8732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.199.129 22 <nil> <nil>}
	I0127 12:13:42.972570    8732 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 12:13:43.103132    8732 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 12:13:43.103199    8732 buildroot.go:166] provisioning hostname "multinode-659000-m02"
	I0127 12:13:43.103259    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:13:45.184959    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:13:45.184959    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:13:45.184959    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:13:47.653517    8732 main.go:141] libmachine: [stdout =====>] : 172.29.199.129
	
	I0127 12:13:47.653517    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:13:47.659879    8732 main.go:141] libmachine: Using SSH client type: native
	I0127 12:13:47.659950    8732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.199.129 22 <nil> <nil>}
	I0127 12:13:47.659950    8732 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-659000-m02 && echo "multinode-659000-m02" | sudo tee /etc/hostname
	I0127 12:13:47.824074    8732 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-659000-m02
	
	I0127 12:13:47.824074    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:13:49.889062    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:13:49.889062    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:13:49.889544    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:13:52.387653    8732 main.go:141] libmachine: [stdout =====>] : 172.29.199.129
	
	I0127 12:13:52.387874    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:13:52.394194    8732 main.go:141] libmachine: Using SSH client type: native
	I0127 12:13:52.394194    8732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.199.129 22 <nil> <nil>}
	I0127 12:13:52.394194    8732 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-659000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-659000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-659000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:13:52.545981    8732 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:13:52.545981    8732 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0127 12:13:52.545981    8732 buildroot.go:174] setting up certificates
	I0127 12:13:52.545981    8732 provision.go:84] configureAuth start
	I0127 12:13:52.545981    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:13:54.616808    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:13:54.617595    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:13:54.617697    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:13:57.111481    8732 main.go:141] libmachine: [stdout =====>] : 172.29.199.129
	
	I0127 12:13:57.112475    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:13:57.112511    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:13:59.193518    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:13:59.193518    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:13:59.193518    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:14:01.736710    8732 main.go:141] libmachine: [stdout =====>] : 172.29.199.129
	
	I0127 12:14:01.736710    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:14:01.737682    8732 provision.go:143] copyHostCerts
	I0127 12:14:01.737814    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0127 12:14:01.738573    8732 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0127 12:14:01.738573    8732 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0127 12:14:01.739164    8732 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0127 12:14:01.739936    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0127 12:14:01.740738    8732 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0127 12:14:01.740738    8732 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0127 12:14:01.741059    8732 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0127 12:14:01.741382    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0127 12:14:01.742554    8732 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0127 12:14:01.742661    8732 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0127 12:14:01.743168    8732 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0127 12:14:01.744848    8732 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-659000-m02 san=[127.0.0.1 172.29.199.129 localhost minikube multinode-659000-m02]
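The server certificate generated here is signed by the minikube CA (ca.pem / ca-key.pem) and carries SANs for loopback, the node's DHCP address, and the host names listed in the log line above. The sketch below only shows how those SANs go onto an x509 template in Go; it is self-signed for brevity and is not minikube's provisioning code:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // SANs mirroring the log: san=[127.0.0.1 172.29.199.129 localhost minikube multinode-659000-m02]
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-659000-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config dump
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.29.199.129")},
            DNSNames:     []string{"localhost", "minikube", "multinode-659000-m02"},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        // Self-signed here for brevity; the real flow signs with the CA key instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }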
	I0127 12:14:01.874161    8732 provision.go:177] copyRemoteCerts
	I0127 12:14:01.884880    8732 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:14:01.884880    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:14:04.011327    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:14:04.011875    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:14:04.011931    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:14:06.518522    8732 main.go:141] libmachine: [stdout =====>] : 172.29.199.129
	
	I0127 12:14:06.518711    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:14:06.518941    8732 sshutil.go:53] new ssh client: &{IP:172.29.199.129 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000-m02\id_rsa Username:docker}
	I0127 12:14:06.620357    8732 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7353972s)
	I0127 12:14:06.620357    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0127 12:14:06.620357    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0127 12:14:06.672928    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0127 12:14:06.672928    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 12:14:06.721593    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0127 12:14:06.722238    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0127 12:14:06.772328    8732 provision.go:87] duration metric: took 14.2261992s to configureAuth
	I0127 12:14:06.772442    8732 buildroot.go:189] setting minikube options for container-runtime
	I0127 12:14:06.773277    8732 config.go:182] Loaded profile config "multinode-659000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:14:06.773363    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:14:08.881047    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:14:08.881232    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:14:08.881307    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:14:11.388474    8732 main.go:141] libmachine: [stdout =====>] : 172.29.199.129
	
	I0127 12:14:11.389499    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:14:11.394278    8732 main.go:141] libmachine: Using SSH client type: native
	I0127 12:14:11.395058    8732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.199.129 22 <nil> <nil>}
	I0127 12:14:11.395058    8732 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0127 12:14:11.539278    8732 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0127 12:14:11.539278    8732 buildroot.go:70] root file system type: tmpfs
	I0127 12:14:11.539278    8732 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0127 12:14:11.539278    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:14:13.633122    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:14:13.633122    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:14:13.633577    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:14:16.118909    8732 main.go:141] libmachine: [stdout =====>] : 172.29.199.129
	
	I0127 12:14:16.118909    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:14:16.125899    8732 main.go:141] libmachine: Using SSH client type: native
	I0127 12:14:16.125899    8732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.199.129 22 <nil> <nil>}
	I0127 12:14:16.126595    8732 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.29.204.17"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0127 12:14:16.288868    8732 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.29.204.17
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0127 12:14:16.289000    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:14:18.351700    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:14:18.351700    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:14:18.352256    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:14:20.807196    8732 main.go:141] libmachine: [stdout =====>] : 172.29.199.129
	
	I0127 12:14:20.807196    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:14:20.811761    8732 main.go:141] libmachine: Using SSH client type: native
	I0127 12:14:20.812486    8732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.199.129 22 <nil> <nil>}
	I0127 12:14:20.812542    8732 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0127 12:14:23.035183    8732 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
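The one-liner above makes the unit update idempotent: the freshly rendered docker.service.new only replaces /lib/systemd/system/docker.service (followed by daemon-reload, enable, restart) when diff reports a difference, and on this brand-new node the target does not exist yet, so the swap and the enable symlink always run. A local Go sketch of the same compare-then-swap pattern (paths and systemctl flags copied from the command in the log; privilege handling and error reporting simplified):

    package main

    import (
        "bytes"
        "os"
        "os/exec"
    )

    func main() {
        const cur = "/lib/systemd/system/docker.service"
        const next = cur + ".new"

        oldBody, _ := os.ReadFile(cur) // a missing unit reads as nil and never matches the new body
        newBody, err := os.ReadFile(next)
        if err != nil {
            panic(err)
        }
        if bytes.Equal(oldBody, newBody) {
            return // unit unchanged: leave the running daemon alone
        }
        if err := os.Rename(next, cur); err != nil {
            panic(err)
        }
        for _, args := range [][]string{
            {"systemctl", "-f", "daemon-reload"},
            {"systemctl", "-f", "enable", "docker"},
            {"systemctl", "-f", "restart", "docker"},
        } {
            if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
                panic(string(out))
            }
        }
    }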
	
	I0127 12:14:23.035260    8732 machine.go:96] duration metric: took 44.557364s to provisionDockerMachine
	I0127 12:14:23.035316    8732 client.go:171] duration metric: took 1m53.38034s to LocalClient.Create
	I0127 12:14:23.035426    8732 start.go:167] duration metric: took 1m53.3803713s to libmachine.API.Create "multinode-659000"
	I0127 12:14:23.035426    8732 start.go:293] postStartSetup for "multinode-659000-m02" (driver="hyperv")
	I0127 12:14:23.035487    8732 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:14:23.048726    8732 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:14:23.048726    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:14:25.188864    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:14:25.188864    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:14:25.189402    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:14:27.646300    8732 main.go:141] libmachine: [stdout =====>] : 172.29.199.129
	
	I0127 12:14:27.647237    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:14:27.647826    8732 sshutil.go:53] new ssh client: &{IP:172.29.199.129 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000-m02\id_rsa Username:docker}
	I0127 12:14:27.749981    8732 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7012059s)
	I0127 12:14:27.760634    8732 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:14:27.767886    8732 command_runner.go:130] > NAME=Buildroot
	I0127 12:14:27.768091    8732 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0127 12:14:27.768091    8732 command_runner.go:130] > ID=buildroot
	I0127 12:14:27.768091    8732 command_runner.go:130] > VERSION_ID=2023.02.9
	I0127 12:14:27.768091    8732 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0127 12:14:27.768091    8732 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 12:14:27.768091    8732 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0127 12:14:27.768559    8732 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0127 12:14:27.769806    8732 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> 59562.pem in /etc/ssl/certs
	I0127 12:14:27.769806    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> /etc/ssl/certs/59562.pem
	I0127 12:14:27.779629    8732 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:14:27.796210    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem --> /etc/ssl/certs/59562.pem (1708 bytes)
	I0127 12:14:27.837443    8732 start.go:296] duration metric: took 4.8019665s for postStartSetup
	I0127 12:14:27.840209    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:14:29.954920    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:14:29.954995    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:14:29.955084    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:14:32.425325    8732 main.go:141] libmachine: [stdout =====>] : 172.29.199.129
	
	I0127 12:14:32.425529    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:14:32.425914    8732 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\config.json ...
	I0127 12:14:32.429648    8732 start.go:128] duration metric: took 2m2.777544s to createHost
	I0127 12:14:32.429793    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:14:34.494173    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:14:34.494358    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:14:34.494358    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:14:36.929937    8732 main.go:141] libmachine: [stdout =====>] : 172.29.199.129
	
	I0127 12:14:36.930299    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:14:36.936306    8732 main.go:141] libmachine: Using SSH client type: native
	I0127 12:14:36.936306    8732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.199.129 22 <nil> <nil>}
	I0127 12:14:36.936915    8732 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 12:14:37.070293    8732 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737980077.081561556
	
	I0127 12:14:37.070293    8732 fix.go:216] guest clock: 1737980077.081561556
	I0127 12:14:37.070293    8732 fix.go:229] Guest: 2025-01-27 12:14:37.081561556 +0000 UTC Remote: 2025-01-27 12:14:32.4296484 +0000 UTC m=+331.267461801 (delta=4.651913156s)
	I0127 12:14:37.070293    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:14:39.182394    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:14:39.182672    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:14:39.182672    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:14:41.621377    8732 main.go:141] libmachine: [stdout =====>] : 172.29.199.129
	
	I0127 12:14:41.622169    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:14:41.627373    8732 main.go:141] libmachine: Using SSH client type: native
	I0127 12:14:41.628141    8732 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.199.129 22 <nil> <nil>}
	I0127 12:14:41.628141    8732 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1737980077
	I0127 12:14:41.773465    8732 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 27 12:14:37 UTC 2025
	
	I0127 12:14:41.773465    8732 fix.go:236] clock set: Mon Jan 27 12:14:37 UTC 2025
	 (err=<nil>)
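The clock check reads date +%s.%N inside the guest and compares it with the host-side timestamp captured when createHost returned; the 4.65s delta is outside the tolerance, so the log follows up with sudo date -s to realign the guest clock. A small sketch of just the delta computation, with the two values copied from the log:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // guestDelta parses "date +%s.%N" output (always nine fractional digits,
    // i.e. nanoseconds) and returns how far the guest clock is from hostRef.
    func guestDelta(guestOut string, hostRef time.Time) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, err = strconv.ParseInt(parts[1], 10, 64)
            if err != nil {
                return 0, err
            }
        }
        return time.Unix(sec, nsec).Sub(hostRef), nil
    }

    func main() {
        hostRef := time.Date(2025, 1, 27, 12, 14, 32, 429648400, time.UTC) // "Remote" timestamp from the log
        d, err := guestDelta("1737980077.081561556", hostRef)
        if err != nil {
            panic(err)
        }
        fmt.Printf("guest is ahead of the host by %v\n", d) // matches delta=4.651913156s in the log
    }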
	I0127 12:14:41.773465    8732 start.go:83] releasing machines lock for "multinode-659000-m02", held for 2m12.1214438s
	I0127 12:14:41.773465    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:14:43.817562    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:14:43.817562    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:14:43.818270    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:14:46.347614    8732 main.go:141] libmachine: [stdout =====>] : 172.29.199.129
	
	I0127 12:14:46.347614    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:14:46.351528    8732 out.go:177] * Found network options:
	I0127 12:14:46.354048    8732 out.go:177]   - NO_PROXY=172.29.204.17
	W0127 12:14:46.356539    8732 proxy.go:119] fail to check proxy env: Error ip not in block
	I0127 12:14:46.359429    8732 out.go:177]   - NO_PROXY=172.29.204.17
	W0127 12:14:46.361933    8732 proxy.go:119] fail to check proxy env: Error ip not in block
	W0127 12:14:46.363024    8732 proxy.go:119] fail to check proxy env: Error ip not in block
	I0127 12:14:46.366314    8732 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0127 12:14:46.366409    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:14:46.376566    8732 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 12:14:46.376566    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:14:48.575107    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:14:48.575315    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:14:48.575315    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:14:48.575315    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:14:48.575315    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:14:48.575315    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:14:51.203319    8732 main.go:141] libmachine: [stdout =====>] : 172.29.199.129
	
	I0127 12:14:51.203490    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:14:51.203490    8732 sshutil.go:53] new ssh client: &{IP:172.29.199.129 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000-m02\id_rsa Username:docker}
	I0127 12:14:51.225367    8732 main.go:141] libmachine: [stdout =====>] : 172.29.199.129
	
	I0127 12:14:51.225367    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:14:51.226008    8732 sshutil.go:53] new ssh client: &{IP:172.29.199.129 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000-m02\id_rsa Username:docker}
	I0127 12:14:51.312520    8732 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0127 12:14:51.313509    8732 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9368916s)
	W0127 12:14:51.313509    8732 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 12:14:51.324746    8732 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0127 12:14:51.324746    8732 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9583402s)
	W0127 12:14:51.324746    8732 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0127 12:14:51.326729    8732 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:14:51.355907    8732 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0127 12:14:51.355969    8732 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 12:14:51.356004    8732 start.go:495] detecting cgroup driver to use...
	I0127 12:14:51.356201    8732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:14:51.392993    8732 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0127 12:14:51.407534    8732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 12:14:51.436814    8732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 12:14:51.455757    8732 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 12:14:51.467299    8732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W0127 12:14:51.475344    8732 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0127 12:14:51.475344    8732 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0127 12:14:51.511902    8732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:14:51.542395    8732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 12:14:51.571146    8732 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:14:51.600054    8732 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:14:51.628820    8732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 12:14:51.659417    8732 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 12:14:51.691729    8732 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 12:14:51.720653    8732 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:14:51.737927    8732 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:14:51.737927    8732 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:14:51.751160    8732 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 12:14:51.782245    8732 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 12:14:51.814505    8732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:14:52.006097    8732 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 12:14:52.037532    8732 start.go:495] detecting cgroup driver to use...
	I0127 12:14:52.049633    8732 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0127 12:14:52.075375    8732 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0127 12:14:52.075410    8732 command_runner.go:130] > [Unit]
	I0127 12:14:52.075410    8732 command_runner.go:130] > Description=Docker Application Container Engine
	I0127 12:14:52.075410    8732 command_runner.go:130] > Documentation=https://docs.docker.com
	I0127 12:14:52.075410    8732 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0127 12:14:52.075410    8732 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0127 12:14:52.075484    8732 command_runner.go:130] > StartLimitBurst=3
	I0127 12:14:52.075484    8732 command_runner.go:130] > StartLimitIntervalSec=60
	I0127 12:14:52.075484    8732 command_runner.go:130] > [Service]
	I0127 12:14:52.075484    8732 command_runner.go:130] > Type=notify
	I0127 12:14:52.075484    8732 command_runner.go:130] > Restart=on-failure
	I0127 12:14:52.075484    8732 command_runner.go:130] > Environment=NO_PROXY=172.29.204.17
	I0127 12:14:52.075484    8732 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0127 12:14:52.075543    8732 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0127 12:14:52.075543    8732 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0127 12:14:52.075543    8732 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0127 12:14:52.075543    8732 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0127 12:14:52.075593    8732 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0127 12:14:52.075593    8732 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0127 12:14:52.075593    8732 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0127 12:14:52.075593    8732 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0127 12:14:52.075593    8732 command_runner.go:130] > ExecStart=
	I0127 12:14:52.075593    8732 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0127 12:14:52.075845    8732 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0127 12:14:52.075907    8732 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0127 12:14:52.075907    8732 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0127 12:14:52.075907    8732 command_runner.go:130] > LimitNOFILE=infinity
	I0127 12:14:52.075907    8732 command_runner.go:130] > LimitNPROC=infinity
	I0127 12:14:52.075907    8732 command_runner.go:130] > LimitCORE=infinity
	I0127 12:14:52.075907    8732 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0127 12:14:52.075907    8732 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0127 12:14:52.075907    8732 command_runner.go:130] > TasksMax=infinity
	I0127 12:14:52.075907    8732 command_runner.go:130] > TimeoutStartSec=0
	I0127 12:14:52.075907    8732 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0127 12:14:52.075907    8732 command_runner.go:130] > Delegate=yes
	I0127 12:14:52.075907    8732 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0127 12:14:52.075907    8732 command_runner.go:130] > KillMode=process
	I0127 12:14:52.075907    8732 command_runner.go:130] > [Install]
	I0127 12:14:52.075907    8732 command_runner.go:130] > WantedBy=multi-user.target
	I0127 12:14:52.086895    8732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:14:52.119165    8732 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 12:14:52.156340    8732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:14:52.191689    8732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 12:14:52.225188    8732 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 12:14:52.280084    8732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 12:14:52.302058    8732 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:14:52.338807    8732 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0127 12:14:52.349245    8732 ssh_runner.go:195] Run: which cri-dockerd
	I0127 12:14:52.355561    8732 command_runner.go:130] > /usr/bin/cri-dockerd
	I0127 12:14:52.366304    8732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0127 12:14:52.384167    8732 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0127 12:14:52.432055    8732 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0127 12:14:52.632021    8732 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0127 12:14:52.812096    8732 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0127 12:14:52.812287    8732 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0127 12:14:52.853852    8732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:14:53.042647    8732 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 12:14:55.603849    8732 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5611122s)
	I0127 12:14:55.614240    8732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0127 12:14:55.647534    8732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 12:14:55.678399    8732 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0127 12:14:55.856892    8732 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0127 12:14:56.064873    8732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:14:56.272587    8732 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0127 12:14:56.312399    8732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 12:14:56.347143    8732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:14:56.539919    8732 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0127 12:14:56.651246    8732 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0127 12:14:56.662843    8732 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0127 12:14:56.672048    8732 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0127 12:14:56.672048    8732 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0127 12:14:56.672048    8732 command_runner.go:130] > Device: 0,22	Inode: 887         Links: 1
	I0127 12:14:56.672999    8732 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0127 12:14:56.673031    8732 command_runner.go:130] > Access: 2025-01-27 12:14:56.575886874 +0000
	I0127 12:14:56.673031    8732 command_runner.go:130] > Modify: 2025-01-27 12:14:56.575886874 +0000
	I0127 12:14:56.673031    8732 command_runner.go:130] > Change: 2025-01-27 12:14:56.580886906 +0000
	I0127 12:14:56.673031    8732 command_runner.go:130] >  Birth: -
	I0127 12:14:56.673142    8732 start.go:563] Will wait 60s for crictl version
	I0127 12:14:56.684353    8732 ssh_runner.go:195] Run: which crictl
	I0127 12:14:56.690144    8732 command_runner.go:130] > /usr/bin/crictl
	I0127 12:14:56.699817    8732 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:14:56.752872    8732 command_runner.go:130] > Version:  0.1.0
	I0127 12:14:56.752872    8732 command_runner.go:130] > RuntimeName:  docker
	I0127 12:14:56.752872    8732 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0127 12:14:56.752872    8732 command_runner.go:130] > RuntimeApiVersion:  v1
	I0127 12:14:56.752872    8732 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0127 12:14:56.761928    8732 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 12:14:56.795581    8732 command_runner.go:130] > 27.4.0
	I0127 12:14:56.805606    8732 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 12:14:56.833584    8732 command_runner.go:130] > 27.4.0
	I0127 12:14:56.840893    8732 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0127 12:14:56.844163    8732 out.go:177]   - env NO_PROXY=172.29.204.17
	I0127 12:14:56.846548    8732 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0127 12:14:56.851931    8732 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0127 12:14:56.851956    8732 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0127 12:14:56.852016    8732 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0127 12:14:56.852016    8732 ip.go:211] Found interface: {Index:17 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:43:05:a6 Flags:up|broadcast|multicast|running}
	I0127 12:14:56.855812    8732 ip.go:214] interface addr: fe80::8ceb:a58b:811a:7c79/64
	I0127 12:14:56.855812    8732 ip.go:214] interface addr: 172.29.192.1/20
	I0127 12:14:56.866786    8732 ssh_runner.go:195] Run: grep 172.29.192.1	host.minikube.internal$ /etc/hosts
	I0127 12:14:56.872423    8732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:14:56.892566    8732 mustload.go:65] Loading cluster: multinode-659000
	I0127 12:14:56.892789    8732 config.go:182] Loaded profile config "multinode-659000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:14:56.894171    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:14:58.939086    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:14:58.939925    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:14:58.939925    8732 host.go:66] Checking if "multinode-659000" exists ...
	I0127 12:14:58.940793    8732 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000 for IP: 172.29.199.129
	I0127 12:14:58.940825    8732 certs.go:194] generating shared ca certs ...
	I0127 12:14:58.940856    8732 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:14:58.941611    8732 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0127 12:14:58.942151    8732 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0127 12:14:58.942441    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0127 12:14:58.942575    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0127 12:14:58.942575    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0127 12:14:58.943261    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0127 12:14:58.943896    8732 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem (1338 bytes)
	W0127 12:14:58.944251    8732 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956_empty.pem, impossibly tiny 0 bytes
	I0127 12:14:58.944425    8732 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0127 12:14:58.944570    8732 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0127 12:14:58.945659    8732 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0127 12:14:58.945659    8732 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0127 12:14:58.946509    8732 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem (1708 bytes)
	I0127 12:14:58.946558    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem -> /usr/share/ca-certificates/5956.pem
	I0127 12:14:58.946558    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> /usr/share/ca-certificates/59562.pem
	I0127 12:14:58.946558    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:14:58.947283    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:14:58.995439    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 12:14:59.043511    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:14:59.093704    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:14:59.143960    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem --> /usr/share/ca-certificates/5956.pem (1338 bytes)
	I0127 12:14:59.196705    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem --> /usr/share/ca-certificates/59562.pem (1708 bytes)
	I0127 12:14:59.245054    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:14:59.303594    8732 ssh_runner.go:195] Run: openssl version
	I0127 12:14:59.312831    8732 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0127 12:14:59.324236    8732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5956.pem && ln -fs /usr/share/ca-certificates/5956.pem /etc/ssl/certs/5956.pem"
	I0127 12:14:59.355840    8732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5956.pem
	I0127 12:14:59.363576    8732 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 27 10:52 /usr/share/ca-certificates/5956.pem
	I0127 12:14:59.363680    8732 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:52 /usr/share/ca-certificates/5956.pem
	I0127 12:14:59.375151    8732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5956.pem
	I0127 12:14:59.383947    8732 command_runner.go:130] > 51391683
	I0127 12:14:59.395894    8732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5956.pem /etc/ssl/certs/51391683.0"
	I0127 12:14:59.431761    8732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59562.pem && ln -fs /usr/share/ca-certificates/59562.pem /etc/ssl/certs/59562.pem"
	I0127 12:14:59.466908    8732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59562.pem
	I0127 12:14:59.474305    8732 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 27 10:52 /usr/share/ca-certificates/59562.pem
	I0127 12:14:59.474432    8732 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:52 /usr/share/ca-certificates/59562.pem
	I0127 12:14:59.484370    8732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59562.pem
	I0127 12:14:59.494335    8732 command_runner.go:130] > 3ec20f2e
	I0127 12:14:59.505507    8732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59562.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:14:59.541294    8732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:14:59.573869    8732 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:14:59.582068    8732 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 27 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:14:59.582068    8732 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:14:59.592872    8732 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:14:59.602987    8732 command_runner.go:130] > b5213941
	I0127 12:14:59.614332    8732 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 12:14:59.648861    8732 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:14:59.658501    8732 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 12:14:59.658631    8732 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 12:14:59.659442    8732 kubeadm.go:934] updating node {m02 172.29.199.129 8443 v1.32.1 docker false true} ...
	I0127 12:14:59.659442    8732 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-659000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.199.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:multinode-659000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 12:14:59.675647    8732 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 12:14:59.702041    8732 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.32.1': No such file or directory
	I0127 12:14:59.702113    8732 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.1': No such file or directory
	
	Initiating transfer...
	I0127 12:14:59.714548    8732 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.1
	I0127 12:14:59.738112    8732 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubectl.sha256
	I0127 12:14:59.738254    8732 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubeadm.sha256
	I0127 12:14:59.738254    8732 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/linux/amd64/kubelet.sha256
	I0127 12:14:59.738351    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl -> /var/lib/minikube/binaries/v1.32.1/kubectl
	I0127 12:14:59.738351    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm -> /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0127 12:14:59.752443    8732 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm
	I0127 12:14:59.752443    8732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:14:59.754198    8732 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl
	I0127 12:14:59.766876    8732 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubeadm': No such file or directory
	I0127 12:14:59.766876    8732 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubeadm': No such file or directory
	I0127 12:14:59.767077    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubeadm --> /var/lib/minikube/binaries/v1.32.1/kubeadm (70942872 bytes)
	I0127 12:14:59.784768    8732 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubectl': No such file or directory
	I0127 12:14:59.784837    8732 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubectl': No such file or directory
	I0127 12:14:59.784973    8732 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet -> /var/lib/minikube/binaries/v1.32.1/kubelet
	I0127 12:14:59.785052    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubectl --> /var/lib/minikube/binaries/v1.32.1/kubectl (57323672 bytes)
	I0127 12:14:59.799151    8732 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet
	I0127 12:14:59.855761    8732 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubelet': No such file or directory
	I0127 12:14:59.864718    8732 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.1/kubelet': No such file or directory
	I0127 12:14:59.864960    8732 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\linux\amd64\v1.32.1/kubelet --> /var/lib/minikube/binaries/v1.32.1/kubelet (77398276 bytes)
	I0127 12:15:01.201446    8732 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0127 12:15:01.219775    8732 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0127 12:15:01.254786    8732 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:15:01.297255    8732 ssh_runner.go:195] Run: grep 172.29.204.17	control-plane.minikube.internal$ /etc/hosts
	I0127 12:15:01.309024    8732 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.204.17	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:15:01.346545    8732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:15:01.543733    8732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:15:01.576081    8732 host.go:66] Checking if "multinode-659000" exists ...
	I0127 12:15:01.577379    8732 start.go:317] joinCluster: &{Name:multinode-659000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-659000 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.204.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.199.129 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\
jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:15:01.577596    8732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0127 12:15:01.577768    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:15:03.713488    8732 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:15:03.714557    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:15:03.714557    8732 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:15:06.173587    8732 main.go:141] libmachine: [stdout =====>] : 172.29.204.17
	
	I0127 12:15:06.173587    8732 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:15:06.174690    8732 sshutil.go:53] new ssh client: &{IP:172.29.204.17 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000\id_rsa Username:docker}
	I0127 12:15:06.383639    8732 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 3trmm8.u4mvfd02zrectic2 --discovery-token-ca-cert-hash sha256:7b53ba02e26824ededdf08178373023b65bda8005ddad46edfe91cb6a3cb8d3f 
	I0127 12:15:06.383639    8732 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm token create --print-join-command --ttl=0": (4.8059928s)
	I0127 12:15:06.384593    8732 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.29.199.129 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0127 12:15:06.384593    8732 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3trmm8.u4mvfd02zrectic2 --discovery-token-ca-cert-hash sha256:7b53ba02e26824ededdf08178373023b65bda8005ddad46edfe91cb6a3cb8d3f --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-659000-m02"
	I0127 12:15:06.551227    8732 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0127 12:15:07.873333    8732 command_runner.go:130] > [preflight] Running pre-flight checks
	I0127 12:15:07.873521    8732 command_runner.go:130] > [preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
	I0127 12:15:07.873521    8732 command_runner.go:130] > [preflight] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
	I0127 12:15:07.873521    8732 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:15:07.873521    8732 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:15:07.873521    8732 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0127 12:15:07.873521    8732 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0127 12:15:07.873521    8732 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001769016s
	I0127 12:15:07.873608    8732 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0127 12:15:07.873608    8732 command_runner.go:130] > This node has joined the cluster:
	I0127 12:15:07.873608    8732 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0127 12:15:07.873650    8732 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0127 12:15:07.873650    8732 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0127 12:15:07.873650    8732 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3trmm8.u4mvfd02zrectic2 --discovery-token-ca-cert-hash sha256:7b53ba02e26824ededdf08178373023b65bda8005ddad46edfe91cb6a3cb8d3f --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-659000-m02": (1.489042s)
	I0127 12:15:07.873730    8732 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0127 12:15:08.090784    8732 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0127 12:15:08.281636    8732 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-659000-m02 minikube.k8s.io/updated_at=2025_01_27T12_15_08_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650 minikube.k8s.io/name=multinode-659000 minikube.k8s.io/primary=false
	I0127 12:15:08.411985    8732 command_runner.go:130] > node/multinode-659000-m02 labeled
	I0127 12:15:08.412251    8732 start.go:319] duration metric: took 6.8348003s to joinCluster
	I0127 12:15:08.412326    8732 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.29.199.129 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0127 12:15:08.413068    8732 config.go:182] Loaded profile config "multinode-659000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:15:08.415503    8732 out.go:177] * Verifying Kubernetes components...
	I0127 12:15:08.430854    8732 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:15:08.644030    8732 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:15:08.674222    8732 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 12:15:08.675129    8732 kapi.go:59] client config for multinode-659000: &rest.Config{Host:"https://172.29.204.17:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-659000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-659000\\client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x301e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 12:15:08.676246    8732 node_ready.go:35] waiting up to 6m0s for node "multinode-659000-m02" to be "Ready" ...
	I0127 12:15:08.676491    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:08.676585    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:08.676585    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:08.676643    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:08.692022    8732 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0127 12:15:08.692107    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:08.692107    8732 round_trippers.go:580]     Content-Length: 4030
	I0127 12:15:08.692107    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:08 GMT
	I0127 12:15:08.692107    8732 round_trippers.go:580]     Audit-Id: 10079004-faa6-4262-a2a0-220b75fa2cda
	I0127 12:15:08.692107    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:08.692107    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:08.692107    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:08.692107    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:08.692201    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"605","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0127 12:15:09.176698    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:09.176698    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:09.176698    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:09.176698    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:09.184065    8732 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 12:15:09.184065    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:09.184065    8732 round_trippers.go:580]     Content-Length: 4030
	I0127 12:15:09.184065    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:09 GMT
	I0127 12:15:09.184065    8732 round_trippers.go:580]     Audit-Id: 369dfaac-62cf-4eea-a50c-16a5ddcc4f3b
	I0127 12:15:09.184065    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:09.184065    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:09.184065    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:09.184065    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:09.184065    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"605","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0127 12:15:09.677221    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:09.677221    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:09.677295    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:09.677295    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:09.681197    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:15:09.681269    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:09.681269    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:09.681269    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:09.681269    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:09.681269    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:09.681269    8732 round_trippers.go:580]     Content-Length: 4030
	I0127 12:15:09.681269    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:09 GMT
	I0127 12:15:09.681269    8732 round_trippers.go:580]     Audit-Id: 539f9f0a-6a82-4837-ae7e-deb52247680e
	I0127 12:15:09.681269    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"605","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0127 12:15:10.176969    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:10.176969    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:10.176969    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:10.176969    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:10.180980    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:15:10.180980    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:10.181048    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:10.181048    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:10.181048    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:10.181048    8732 round_trippers.go:580]     Content-Length: 4030
	I0127 12:15:10.181048    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:10 GMT
	I0127 12:15:10.181048    8732 round_trippers.go:580]     Audit-Id: 1213df0a-226f-4529-8630-e0d18a148a62
	I0127 12:15:10.181048    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:10.181129    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"605","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0127 12:15:10.676860    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:10.676860    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:10.676860    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:10.676860    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:10.689819    8732 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0127 12:15:10.690363    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:10.690363    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:10.690363    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:10.690363    8732 round_trippers.go:580]     Content-Length: 4030
	I0127 12:15:10.690363    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:10 GMT
	I0127 12:15:10.690363    8732 round_trippers.go:580]     Audit-Id: 7dd6b19f-d4b0-4f1a-a4b0-54bca2c91e8a
	I0127 12:15:10.690363    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:10.690363    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:10.690602    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"605","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0127 12:15:10.690602    8732 node_ready.go:53] node "multinode-659000-m02" has status "Ready":"False"
	I0127 12:15:11.177729    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:11.177729    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:11.177840    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:11.177840    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:11.182258    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:15:11.182258    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:11.182258    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:11.182258    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:11.182258    8732 round_trippers.go:580]     Content-Length: 4030
	I0127 12:15:11.182258    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:11 GMT
	I0127 12:15:11.182368    8732 round_trippers.go:580]     Audit-Id: 84e64b99-8a2f-478e-933b-dbf39572f322
	I0127 12:15:11.182368    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:11.182368    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:11.182526    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"605","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0127 12:15:11.677122    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:11.677122    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:11.677122    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:11.677122    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:11.682831    8732 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:15:11.682831    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:11.682831    8732 round_trippers.go:580]     Audit-Id: a092b01f-4b37-454d-bd0b-91e2b6b215cc
	I0127 12:15:11.682831    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:11.682831    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:11.682831    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:11.682936    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:11.682936    8732 round_trippers.go:580]     Content-Length: 4030
	I0127 12:15:11.682936    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:11 GMT
	I0127 12:15:11.683077    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"605","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0127 12:15:12.177850    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:12.177850    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:12.177850    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:12.177850    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:12.182284    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:12.182349    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:12.182349    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:12.182349    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:12.182349    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:12.182349    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:12.182349    8732 round_trippers.go:580]     Content-Length: 4030
	I0127 12:15:12.182349    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:12 GMT
	I0127 12:15:12.182349    8732 round_trippers.go:580]     Audit-Id: 138fe314-e31e-4a6b-ba46-5d036f1b0315
	I0127 12:15:12.182349    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"605","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0127 12:15:12.676846    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:12.676846    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:12.676846    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:12.676846    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:12.681569    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:12.681676    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:12.681676    8732 round_trippers.go:580]     Audit-Id: 25f06325-8461-46fe-9e78-188e678d414c
	I0127 12:15:12.681676    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:12.681676    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:12.681676    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:12.681676    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:12.681676    8732 round_trippers.go:580]     Content-Length: 4030
	I0127 12:15:12.681748    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:12 GMT
	I0127 12:15:12.681748    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"605","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0127 12:15:13.178320    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:13.178320    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:13.178387    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:13.178387    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:13.182037    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:15:13.182037    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:13.182115    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:13.182115    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:13.182115    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:13.182115    8732 round_trippers.go:580]     Content-Length: 4030
	I0127 12:15:13.182115    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:13 GMT
	I0127 12:15:13.182115    8732 round_trippers.go:580]     Audit-Id: 2a6ecf59-f693-4106-ab9e-daed62e1bf88
	I0127 12:15:13.182115    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:13.182597    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"605","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0127 12:15:13.183129    8732 node_ready.go:53] node "multinode-659000-m02" has status "Ready":"False"
	I0127 12:15:13.676719    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:13.676719    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:13.676719    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:13.676719    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:13.682884    8732 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:15:13.682884    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:13.682884    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:13.682884    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:13.682884    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:13.682884    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:13.682884    8732 round_trippers.go:580]     Content-Length: 4030
	I0127 12:15:13.682884    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:13 GMT
	I0127 12:15:13.682884    8732 round_trippers.go:580]     Audit-Id: 7fa7344c-4cef-4e58-a0c3-944f69c43e29
	I0127 12:15:13.683674    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"605","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0127 12:15:14.176841    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:14.177472    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:14.177472    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:14.177472    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:14.181989    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:14.182928    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:14.182928    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:14.182928    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:14.182928    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:14.182928    8732 round_trippers.go:580]     Content-Length: 4030
	I0127 12:15:14.182928    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:14 GMT
	I0127 12:15:14.183014    8732 round_trippers.go:580]     Audit-Id: 1dda1f8d-b61e-4c04-bb03-e94e94443213
	I0127 12:15:14.183014    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:14.183147    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"605","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0127 12:15:14.677169    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:14.677169    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:14.677169    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:14.677169    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:14.695312    8732 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0127 12:15:14.695388    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:14.695452    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:14 GMT
	I0127 12:15:14.695452    8732 round_trippers.go:580]     Audit-Id: dae7552d-be79-4fe5-b2ad-73146101bfb1
	I0127 12:15:14.695452    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:14.695452    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:14.695452    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:14.695452    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:14.695527    8732 round_trippers.go:580]     Content-Length: 4030
	I0127 12:15:14.695682    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"605","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0127 12:15:15.177087    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:15.177087    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:15.177087    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:15.177087    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:15.183542    8732 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:15:15.183542    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:15.183542    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:15.183542    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:15.183542    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:15.183542    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:15.183542    8732 round_trippers.go:580]     Content-Length: 4030
	I0127 12:15:15.183542    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:15 GMT
	I0127 12:15:15.183542    8732 round_trippers.go:580]     Audit-Id: 792b09e2-0ff0-443e-97e3-ea61189d4c12
	I0127 12:15:15.183542    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"605","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0127 12:15:15.183542    8732 node_ready.go:53] node "multinode-659000-m02" has status "Ready":"False"
	I0127 12:15:15.676733    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:15.676733    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:15.676733    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:15.676733    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:15.682028    8732 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:15:15.682103    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:15.682103    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:15.682103    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:15.682103    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:15.682179    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:15.682179    8732 round_trippers.go:580]     Content-Length: 4030
	I0127 12:15:15.682179    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:15 GMT
	I0127 12:15:15.682179    8732 round_trippers.go:580]     Audit-Id: 82e0bbbb-78d8-4772-b12c-75978a59e09d
	I0127 12:15:15.682276    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"605","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0127 12:15:16.177322    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:16.177322    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:16.177322    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:16.177322    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:16.182532    8732 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:15:16.182625    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:16.182625    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:16.182625    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:16.182625    8732 round_trippers.go:580]     Content-Length: 4030
	I0127 12:15:16.182625    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:16 GMT
	I0127 12:15:16.182625    8732 round_trippers.go:580]     Audit-Id: 8568a659-9e0f-421c-a4f0-636deb90b821
	I0127 12:15:16.182625    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:16.182625    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:16.182835    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"605","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0127 12:15:16.677403    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:16.677403    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:16.677403    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:16.677403    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:16.682176    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:16.682253    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:16.682253    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:16.682253    8732 round_trippers.go:580]     Content-Length: 4030
	I0127 12:15:16.682253    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:16 GMT
	I0127 12:15:16.682253    8732 round_trippers.go:580]     Audit-Id: 3d098edc-1711-49e1-9ca4-eece6e591673
	I0127 12:15:16.682253    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:16.682253    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:16.682253    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:16.682552    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"605","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0127 12:15:17.177010    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:17.177064    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:17.177123    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:17.177123    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:17.180984    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:15:17.180984    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:17.180984    8732 round_trippers.go:580]     Audit-Id: e9995b5f-75e1-46ec-aa29-10907bb0a74b
	I0127 12:15:17.180984    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:17.180984    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:17.181095    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:17.181095    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:17.181095    8732 round_trippers.go:580]     Content-Length: 4030
	I0127 12:15:17.181095    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:17 GMT
	I0127 12:15:17.181146    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"605","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0127 12:15:17.676549    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:17.676549    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:17.676549    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:17.676549    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:17.681798    8732 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:15:17.681867    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:17.681867    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:17.681929    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:17.681929    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:17.681929    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:17.681929    8732 round_trippers.go:580]     Content-Length: 4030
	I0127 12:15:17.681929    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:17 GMT
	I0127 12:15:17.681929    8732 round_trippers.go:580]     Audit-Id: fbd650cc-934a-4f00-90e1-7464dc17b9b4
	I0127 12:15:17.682128    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"605","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0127 12:15:17.682568    8732 node_ready.go:53] node "multinode-659000-m02" has status "Ready":"False"
	I0127 12:15:18.177239    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:18.177239    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:18.177239    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:18.177239    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:18.245595    8732 round_trippers.go:574] Response Status: 200 OK in 68 milliseconds
	I0127 12:15:18.245595    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:18.245595    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:18.245595    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:18.245595    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:18.245595    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:18.245595    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:18 GMT
	I0127 12:15:18.245595    8732 round_trippers.go:580]     Audit-Id: b8738433-7be4-4fa3-8f3e-d21624ca66c4
	I0127 12:15:18.246573    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:18.677827    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:18.677827    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:18.677827    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:18.677827    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:18.711098    8732 round_trippers.go:574] Response Status: 200 OK in 33 milliseconds
	I0127 12:15:18.711098    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:18.711098    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:18.711098    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:18.711098    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:18 GMT
	I0127 12:15:18.711098    8732 round_trippers.go:580]     Audit-Id: 24c4026f-f447-42fe-a748-221f8e2b9f33
	I0127 12:15:18.711098    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:18.711098    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:18.711482    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:19.177308    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:19.177308    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:19.177308    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:19.177308    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:19.182372    8732 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:15:19.182372    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:19.182372    8732 round_trippers.go:580]     Audit-Id: 625ad251-5e28-431b-a3cb-67300d0caead
	I0127 12:15:19.182372    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:19.182569    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:19.182590    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:19.182590    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:19.182590    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:19 GMT
	I0127 12:15:19.182789    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:19.677151    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:19.677151    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:19.677151    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:19.677151    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:19.680629    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:15:19.680629    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:19.680739    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:19.680739    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:19.680739    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:19.680739    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:19 GMT
	I0127 12:15:19.680739    8732 round_trippers.go:580]     Audit-Id: 9f6019f4-d080-440d-9e00-bc609face6d2
	I0127 12:15:19.680739    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:19.680981    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:20.177861    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:20.177861    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:20.177861    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:20.177861    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:20.181877    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:15:20.181877    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:20.181877    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:20.181877    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:20 GMT
	I0127 12:15:20.181951    8732 round_trippers.go:580]     Audit-Id: e8585ac5-de0a-4468-b689-d2cc1605c209
	I0127 12:15:20.181951    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:20.181951    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:20.181951    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:20.181951    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:20.182849    8732 node_ready.go:53] node "multinode-659000-m02" has status "Ready":"False"
	I0127 12:15:20.677619    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:20.677619    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:20.677728    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:20.677728    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:20.680872    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:15:20.681943    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:20.681943    8732 round_trippers.go:580]     Audit-Id: 14bb3a74-ccd1-4c8a-bd3b-dae151cc0782
	I0127 12:15:20.681943    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:20.681943    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:20.681943    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:20.681943    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:20.681943    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:20 GMT
	I0127 12:15:20.682407    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:21.177689    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:21.177768    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:21.177768    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:21.177768    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:21.181988    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:21.182028    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:21.182028    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:21 GMT
	I0127 12:15:21.182028    8732 round_trippers.go:580]     Audit-Id: 74c2282a-8bfc-407d-ab8b-ae2ec0d57d6a
	I0127 12:15:21.182028    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:21.182028    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:21.182028    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:21.182028    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:21.182335    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:21.676734    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:21.677256    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:21.677256    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:21.677256    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:21.680964    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:15:21.680964    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:21.681071    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:21.681071    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:21.681071    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:21 GMT
	I0127 12:15:21.681071    8732 round_trippers.go:580]     Audit-Id: 8dc670e8-635c-41b8-83df-f3ffbf22d990
	I0127 12:15:21.681071    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:21.681071    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:21.681431    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:22.177990    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:22.178075    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:22.178075    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:22.178075    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:22.181410    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:15:22.181410    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:22.181410    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:22.181410    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:22 GMT
	I0127 12:15:22.181410    8732 round_trippers.go:580]     Audit-Id: deedf1a2-818f-4452-803e-1cc4cb05133a
	I0127 12:15:22.181530    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:22.181530    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:22.181530    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:22.181800    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:22.677360    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:22.677431    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:22.677431    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:22.677496    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:22.681728    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:22.681887    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:22.681887    8732 round_trippers.go:580]     Audit-Id: 4e0201d8-9580-495f-95b2-b8ef767eb223
	I0127 12:15:22.681887    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:22.681887    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:22.681887    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:22.681887    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:22.681887    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:22 GMT
	I0127 12:15:22.682137    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:22.682863    8732 node_ready.go:53] node "multinode-659000-m02" has status "Ready":"False"
	I0127 12:15:23.177392    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:23.177498    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:23.177562    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:23.177562    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:23.182828    8732 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:15:23.182828    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:23.182828    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:23.182828    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:23 GMT
	I0127 12:15:23.182828    8732 round_trippers.go:580]     Audit-Id: da1824d7-a643-42b0-b38a-9c1f96735468
	I0127 12:15:23.182828    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:23.182828    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:23.182828    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:23.182828    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:23.677306    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:23.677389    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:23.677389    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:23.677389    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:23.681629    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:23.681712    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:23.681712    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:23.681712    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:23 GMT
	I0127 12:15:23.681712    8732 round_trippers.go:580]     Audit-Id: 9ac7aded-6154-4bb0-9c1b-ab651e4621f9
	I0127 12:15:23.681712    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:23.681712    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:23.681712    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:23.681963    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:24.177334    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:24.177470    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:24.177541    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:24.177541    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:24.185879    8732 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0127 12:15:24.186037    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:24.186037    8732 round_trippers.go:580]     Audit-Id: 92f04e7f-ee83-400f-852e-a34b00bd1c30
	I0127 12:15:24.186037    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:24.186037    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:24.186037    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:24.186037    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:24.186037    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:24 GMT
	I0127 12:15:24.186037    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:24.677037    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:24.677037    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:24.677037    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:24.677037    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:24.681914    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:24.682036    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:24.682036    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:24 GMT
	I0127 12:15:24.682036    8732 round_trippers.go:580]     Audit-Id: c8c41008-64bc-49e2-97f8-252eb36269ab
	I0127 12:15:24.682036    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:24.682036    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:24.682036    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:24.682036    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:24.682254    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:25.177001    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:25.177001    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:25.177001    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:25.177001    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:25.181078    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:25.181549    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:25.181549    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:25 GMT
	I0127 12:15:25.181549    8732 round_trippers.go:580]     Audit-Id: 6987e361-7401-4f5e-95e6-6b518c209ad4
	I0127 12:15:25.181549    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:25.181549    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:25.181549    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:25.181549    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:25.181653    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:25.182588    8732 node_ready.go:53] node "multinode-659000-m02" has status "Ready":"False"
	I0127 12:15:25.676572    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:25.676572    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:25.676572    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:25.676572    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:25.680945    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:25.680945    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:25.681062    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:25.681062    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:25.681062    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:25.681062    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:25.681062    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:25 GMT
	I0127 12:15:25.681062    8732 round_trippers.go:580]     Audit-Id: c8f0a7dd-5725-4213-8628-6d4805595576
	I0127 12:15:25.681467    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:26.177502    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:26.177502    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:26.177502    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:26.177502    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:26.181437    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:15:26.181437    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:26.181437    8732 round_trippers.go:580]     Audit-Id: a6b73e90-27ad-4a5c-98eb-70ee571fff69
	I0127 12:15:26.181437    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:26.181437    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:26.181437    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:26.181530    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:26.181530    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:26 GMT
	I0127 12:15:26.181752    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:26.677166    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:26.677166    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:26.677166    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:26.677166    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:26.684407    8732 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 12:15:26.684494    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:26.684494    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:26 GMT
	I0127 12:15:26.684494    8732 round_trippers.go:580]     Audit-Id: 2b881046-7888-4db4-96a0-ab8bd31e63cd
	I0127 12:15:26.684494    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:26.684494    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:26.684494    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:26.684494    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:26.685123    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:27.177595    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:27.177671    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:27.177671    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:27.177671    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:27.183018    8732 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:15:27.183069    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:27.183114    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:27.183114    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:27.183114    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:27 GMT
	I0127 12:15:27.183114    8732 round_trippers.go:580]     Audit-Id: f0fcea80-8f9f-4ebd-9734-000c9965e4e1
	I0127 12:15:27.183114    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:27.183114    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:27.183438    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:27.184242    8732 node_ready.go:53] node "multinode-659000-m02" has status "Ready":"False"
	I0127 12:15:27.677327    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:27.677327    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:27.677478    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:27.677478    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:27.681261    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:15:27.681261    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:27.681261    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:27.681362    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:27.681362    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:27.681362    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:27 GMT
	I0127 12:15:27.681362    8732 round_trippers.go:580]     Audit-Id: a5f76524-f1c5-4bd1-b548-d502cd2c6f9a
	I0127 12:15:27.681362    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:27.681555    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:28.177996    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:28.178069    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:28.178069    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:28.178069    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:28.181144    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:15:28.181208    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:28.181208    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:28 GMT
	I0127 12:15:28.181208    8732 round_trippers.go:580]     Audit-Id: 03449bfc-71ee-4ac3-a847-c875ad82aa5d
	I0127 12:15:28.181208    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:28.181208    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:28.181208    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:28.181208    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:28.181687    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:28.677681    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:28.677818    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:28.677818    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:28.677818    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:28.684685    8732 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:15:28.684685    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:28.684685    8732 round_trippers.go:580]     Audit-Id: bbe42945-11d8-415e-b660-208c1ff9f708
	I0127 12:15:28.684685    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:28.684685    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:28.684685    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:28.684685    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:28.684685    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:28 GMT
	I0127 12:15:28.684685    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:29.177480    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:29.177480    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:29.177480    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:29.177480    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:29.181801    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:29.181890    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:29.181890    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:29.181890    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:29.181890    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:29 GMT
	I0127 12:15:29.181890    8732 round_trippers.go:580]     Audit-Id: 6376ef62-c6c8-4086-bc34-80b2299c09f3
	I0127 12:15:29.181890    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:29.181890    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:29.182429    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:29.677496    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:29.677496    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:29.677496    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:29.677496    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:29.682067    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:29.682186    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:29.682186    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:29.682186    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:29.682186    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:29 GMT
	I0127 12:15:29.682186    8732 round_trippers.go:580]     Audit-Id: d4e886e4-45ed-44ae-9b4f-66fd23295294
	I0127 12:15:29.682186    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:29.682186    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:29.682500    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:29.683341    8732 node_ready.go:53] node "multinode-659000-m02" has status "Ready":"False"
	I0127 12:15:30.177558    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:30.177558    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:30.177558    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:30.177558    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:30.183057    8732 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:15:30.183057    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:30.183057    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:30.183057    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:30 GMT
	I0127 12:15:30.183057    8732 round_trippers.go:580]     Audit-Id: 3d099f58-b1e4-4e32-8d1f-abe0c5b8807f
	I0127 12:15:30.183162    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:30.183162    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:30.183162    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:30.188772    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:30.677936    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:30.678006    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:30.678006    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:30.678006    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:30.682416    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:30.682488    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:30.682488    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:30 GMT
	I0127 12:15:30.682488    8732 round_trippers.go:580]     Audit-Id: 9df7d780-a81f-4a5e-98b3-562228a08064
	I0127 12:15:30.682488    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:30.682488    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:30.682488    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:30.682488    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:30.682827    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:31.177736    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:31.177736    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:31.177736    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:31.177736    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:31.181785    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:31.181785    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:31.181897    8732 round_trippers.go:580]     Audit-Id: 3ac26314-724a-4166-8857-870e5bff08e1
	I0127 12:15:31.181897    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:31.181897    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:31.181897    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:31.181897    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:31.181897    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:31 GMT
	I0127 12:15:31.182134    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:31.677706    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:31.677706    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:31.677706    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:31.677706    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:31.682262    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:31.682262    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:31.682262    8732 round_trippers.go:580]     Audit-Id: 34fa6f24-5c86-4312-a4be-b90bdd6e1230
	I0127 12:15:31.682262    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:31.682355    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:31.682355    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:31.682355    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:31.682355    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:31 GMT
	I0127 12:15:31.682721    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:32.178138    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:32.178138    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:32.178138    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:32.178138    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:32.183518    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:32.183518    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:32.183518    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:32 GMT
	I0127 12:15:32.183518    8732 round_trippers.go:580]     Audit-Id: 3563ce0e-cebb-4f33-bf4a-fc875aad65bd
	I0127 12:15:32.183518    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:32.183518    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:32.183518    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:32.183518    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:32.183806    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:32.183978    8732 node_ready.go:53] node "multinode-659000-m02" has status "Ready":"False"
	I0127 12:15:32.676886    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:32.676886    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:32.676886    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:32.676886    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:32.681847    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:32.681847    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:32.681847    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:32.681847    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:32.681847    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:32 GMT
	I0127 12:15:32.681847    8732 round_trippers.go:580]     Audit-Id: 73e039ca-278d-4357-ba06-d7a5351d11e7
	I0127 12:15:32.681847    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:32.681847    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:32.682672    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:33.177185    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:33.177185    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:33.177185    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:33.177185    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:33.181733    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:33.181822    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:33.181822    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:33.181822    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:33.181822    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:33.181822    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:33 GMT
	I0127 12:15:33.181822    8732 round_trippers.go:580]     Audit-Id: 0d9a5c5d-9ec3-4432-a8ce-dc481c59ee89
	I0127 12:15:33.181822    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:33.181991    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:33.677529    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:33.677529    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:33.677529    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:33.677529    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:33.680170    8732 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:15:33.680893    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:33.680893    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:33.680893    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:33.680893    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:33 GMT
	I0127 12:15:33.680893    8732 round_trippers.go:580]     Audit-Id: bd956b8e-58d6-4fe2-a85d-1a53487264f6
	I0127 12:15:33.680893    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:33.680893    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:33.681166    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:34.177239    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:34.177239    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:34.177239    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:34.177239    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:34.183503    8732 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:15:34.183503    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:34.183503    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:34.183503    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:34.183503    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:34.183503    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:34.183503    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:34 GMT
	I0127 12:15:34.183503    8732 round_trippers.go:580]     Audit-Id: 37910e7c-d1c3-475c-87bd-aaed3c7482b8
	I0127 12:15:34.183503    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:34.184607    8732 node_ready.go:53] node "multinode-659000-m02" has status "Ready":"False"
	I0127 12:15:34.677576    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:34.677576    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:34.677576    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:34.678432    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:34.683215    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:34.683215    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:34.683215    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:34.683313    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:34.683313    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:34 GMT
	I0127 12:15:34.683313    8732 round_trippers.go:580]     Audit-Id: a03353d4-b8f5-4dae-b5cd-31b391c336c7
	I0127 12:15:34.683313    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:34.683313    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:34.683529    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:35.176813    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:35.176813    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:35.176813    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:35.176813    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:35.180901    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:35.181447    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:35.181447    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:35.181447    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:35 GMT
	I0127 12:15:35.181447    8732 round_trippers.go:580]     Audit-Id: 6bddeb9c-71d2-4755-b1c6-0e7fd24cc885
	I0127 12:15:35.181447    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:35.181447    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:35.181447    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:35.181689    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:35.677316    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:35.677316    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:35.677316    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:35.677316    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:35.681709    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:35.681709    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:35.681709    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:35 GMT
	I0127 12:15:35.681709    8732 round_trippers.go:580]     Audit-Id: f0e4093d-77d6-48ef-ab3b-1c7f063bfed4
	I0127 12:15:35.681709    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:35.681709    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:35.681709    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:35.681709    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:35.682219    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:36.177758    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:36.177758    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:36.177758    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:36.177758    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:36.184741    8732 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:15:36.184867    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:36.184867    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:36.184867    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:36.184867    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:36.184867    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:36 GMT
	I0127 12:15:36.184867    8732 round_trippers.go:580]     Audit-Id: ac71f4d8-fc18-4163-a665-744965fa072b
	I0127 12:15:36.184867    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:36.185193    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:36.185193    8732 node_ready.go:53] node "multinode-659000-m02" has status "Ready":"False"
	I0127 12:15:36.676899    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:36.676899    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:36.676899    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:36.676899    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:36.681696    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:36.681696    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:36.681696    8732 round_trippers.go:580]     Audit-Id: c4ddb94f-a7dc-4411-b4cb-d630e93b3f46
	I0127 12:15:36.681696    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:36.681696    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:36.681696    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:36.681805    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:36.681805    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:36 GMT
	I0127 12:15:36.682117    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"619","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0127 12:15:37.177915    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:37.177915    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:37.177915    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:37.178048    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:37.182857    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:37.182939    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:37.182939    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:37.182939    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:37.182939    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:37 GMT
	I0127 12:15:37.182939    8732 round_trippers.go:580]     Audit-Id: 36f6099d-ebb5-48e0-b441-46a61fcf20d6
	I0127 12:15:37.182939    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:37.182992    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:37.184010    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"649","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3264 chars]
	I0127 12:15:37.184453    8732 node_ready.go:49] node "multinode-659000-m02" has status "Ready":"True"
	I0127 12:15:37.184453    8732 node_ready.go:38] duration metric: took 28.5078796s for node "multinode-659000-m02" to be "Ready" ...
	I0127 12:15:37.184538    8732 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:15:37.184694    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods
	I0127 12:15:37.184711    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:37.184711    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:37.184711    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:37.189303    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:37.189303    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:37.189303    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:37.189303    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:37.189303    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:37 GMT
	I0127 12:15:37.189392    8732 round_trippers.go:580]     Audit-Id: 9a22cbfa-2c05-486e-8c66-dca7c54716ac
	I0127 12:15:37.189392    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:37.189392    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:37.193674    8732 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"650"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"442","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 73002 chars]
	I0127 12:15:37.197030    8732 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace to be "Ready" ...
	I0127 12:15:37.197258    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:15:37.197258    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:37.197258    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:37.197352    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:37.200809    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:15:37.200864    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:37.200907    8732 round_trippers.go:580]     Audit-Id: 0ac12369-8e66-45b1-8aa1-fbc63584f37a
	I0127 12:15:37.200907    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:37.200907    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:37.200907    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:37.200932    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:37.200932    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:37 GMT
	I0127 12:15:37.200932    8732 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"442","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6834 chars]
	I0127 12:15:37.202128    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:15:37.202128    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:37.202128    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:37.202128    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:37.205283    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:15:37.205308    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:37.205308    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:37.205308    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:37.205308    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:37.205308    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:37 GMT
	I0127 12:15:37.205308    8732 round_trippers.go:580]     Audit-Id: e68f3e44-a6fb-4bbd-9032-0998cab8178a
	I0127 12:15:37.205308    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:37.205856    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"451","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0127 12:15:37.206367    8732 pod_ready.go:93] pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace has status "Ready":"True"
	I0127 12:15:37.206367    8732 pod_ready.go:82] duration metric: took 9.2937ms for pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace to be "Ready" ...
	I0127 12:15:37.206367    8732 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:15:37.206367    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-659000
	I0127 12:15:37.206367    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:37.206367    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:37.206367    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:37.209765    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:15:37.209765    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:37.209765    8732 round_trippers.go:580]     Audit-Id: 9648edbf-0054-4e2c-a8cb-51ba7aa7664a
	I0127 12:15:37.209765    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:37.209765    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:37.209765    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:37.209765    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:37.209765    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:37 GMT
	I0127 12:15:37.209765    8732 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-659000","namespace":"kube-system","uid":"d2a9c448-86a1-48e3-8b48-345c937e5bb4","resourceVersion":"391","creationTimestamp":"2025-01-27T12:11:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.29.204.17:2379","kubernetes.io/config.hash":"7291ea72d8be6e47ed8b536906d73549","kubernetes.io/config.mirror":"7291ea72d8be6e47ed8b536906d73549","kubernetes.io/config.seen":"2025-01-27T12:11:59.106493267Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:11:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6465 chars]
	I0127 12:15:37.210545    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:15:37.210545    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:37.210545    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:37.210545    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:37.213941    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:15:37.214792    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:37.214792    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:37.214792    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:37.214792    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:37 GMT
	I0127 12:15:37.214872    8732 round_trippers.go:580]     Audit-Id: 0990b2d1-a8f2-4622-808d-ad2c91bcf71d
	I0127 12:15:37.214872    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:37.214872    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:37.215003    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"451","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0127 12:15:37.215003    8732 pod_ready.go:93] pod "etcd-multinode-659000" in "kube-system" namespace has status "Ready":"True"
	I0127 12:15:37.215003    8732 pod_ready.go:82] duration metric: took 8.6363ms for pod "etcd-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:15:37.215003    8732 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:15:37.215003    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-659000
	I0127 12:15:37.215003    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:37.215003    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:37.215003    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:37.219935    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:37.219935    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:37.219935    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:37.219935    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:37.219935    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:37 GMT
	I0127 12:15:37.219935    8732 round_trippers.go:580]     Audit-Id: 479467b9-e26c-4315-b6cb-1615400b69f2
	I0127 12:15:37.219935    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:37.219935    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:37.219935    8732 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-659000","namespace":"kube-system","uid":"f19e9efc-57cc-4e2a-b365-920592a7f352","resourceVersion":"397","creationTimestamp":"2025-01-27T12:11:56Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.29.204.17:8443","kubernetes.io/config.hash":"6bf31ca1befb4fb3e8f2fd27458a3b80","kubernetes.io/config.mirror":"6bf31ca1befb4fb3e8f2fd27458a3b80","kubernetes.io/config.seen":"2025-01-27T12:11:51.419792725Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:11:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0127 12:15:37.220938    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:15:37.220938    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:37.220938    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:37.220938    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:37.224467    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:15:37.225467    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:37.225512    8732 round_trippers.go:580]     Audit-Id: f4e2b7c7-8df9-4b5b-ad3e-a8c6b914acf4
	I0127 12:15:37.225512    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:37.225512    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:37.225512    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:37.225512    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:37.225512    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:37 GMT
	I0127 12:15:37.225792    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"451","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0127 12:15:37.226217    8732 pod_ready.go:93] pod "kube-apiserver-multinode-659000" in "kube-system" namespace has status "Ready":"True"
	I0127 12:15:37.226269    8732 pod_ready.go:82] duration metric: took 11.2657ms for pod "kube-apiserver-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:15:37.226269    8732 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:15:37.226366    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-659000
	I0127 12:15:37.226445    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:37.226445    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:37.226470    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:37.230530    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:15:37.230530    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:37.230530    8732 round_trippers.go:580]     Audit-Id: e578bd38-0c87-4f6c-8341-6f76858a7923
	I0127 12:15:37.230530    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:37.230530    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:37.230530    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:37.230619    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:37.230619    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:37 GMT
	I0127 12:15:37.230983    8732 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-659000","namespace":"kube-system","uid":"8be02f36-161c-44f3-b526-56db3b8a007a","resourceVersion":"401","creationTimestamp":"2025-01-27T12:11:59Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4a14d0700eafa36dd3913955f2c0f839","kubernetes.io/config.mirror":"4a14d0700eafa36dd3913955f2c0f839","kubernetes.io/config.seen":"2025-01-27T12:11:59.106472767Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:11:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0127 12:15:37.231038    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:15:37.231038    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:37.231038    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:37.231038    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:37.235491    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:37.235491    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:37.235491    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:37 GMT
	I0127 12:15:37.235491    8732 round_trippers.go:580]     Audit-Id: 8f9915f7-46d1-4f07-84f4-fa8ed549d5a8
	I0127 12:15:37.235491    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:37.235491    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:37.235491    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:37.235491    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:37.235740    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"451","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0127 12:15:37.235740    8732 pod_ready.go:93] pod "kube-controller-manager-multinode-659000" in "kube-system" namespace has status "Ready":"True"
	I0127 12:15:37.235740    8732 pod_ready.go:82] duration metric: took 9.4705ms for pod "kube-controller-manager-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:15:37.235740    8732 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pjhc8" in "kube-system" namespace to be "Ready" ...
	I0127 12:15:37.378501    8732 request.go:632] Waited for 142.232ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pjhc8
	I0127 12:15:37.378835    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pjhc8
	I0127 12:15:37.378835    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:37.378835    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:37.378835    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:37.381699    8732 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:15:37.381699    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:37.381699    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:37 GMT
	I0127 12:15:37.381699    8732 round_trippers.go:580]     Audit-Id: 83a148f3-4606-49d5-9992-56d596f95b1c
	I0127 12:15:37.381699    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:37.381699    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:37.381699    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:37.381699    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:37.382478    8732 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pjhc8","generateName":"kube-proxy-","namespace":"kube-system","uid":"ddb6698c-b83d-4a49-9672-c894e87cbb66","resourceVersion":"626","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d88eb776-b464-4f2b-8140-44249610a7fa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d88eb776-b464-4f2b-8140-44249610a7fa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6207 chars]
	I0127 12:15:37.578771    8732 request.go:632] Waited for 196.1344ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:37.579270    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:15:37.579330    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:37.579330    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:37.579370    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:37.585027    8732 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:15:37.585027    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:37.585027    8732 round_trippers.go:580]     Audit-Id: 7dd78d82-f6f0-4e99-916a-475efb60e875
	I0127 12:15:37.585027    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:37.585027    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:37.585027    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:37.585027    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:37.585027    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:37 GMT
	I0127 12:15:37.585314    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"649","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3264 chars]
	I0127 12:15:37.585531    8732 pod_ready.go:93] pod "kube-proxy-pjhc8" in "kube-system" namespace has status "Ready":"True"
	I0127 12:15:37.585531    8732 pod_ready.go:82] duration metric: took 349.7875ms for pod "kube-proxy-pjhc8" in "kube-system" namespace to be "Ready" ...
	I0127 12:15:37.585531    8732 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s46mv" in "kube-system" namespace to be "Ready" ...
	I0127 12:15:37.777592    8732 request.go:632] Waited for 192.0589ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s46mv
	I0127 12:15:37.777592    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s46mv
	I0127 12:15:37.777946    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:37.777946    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:37.777946    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:37.781014    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:15:37.781014    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:37.781014    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:37 GMT
	I0127 12:15:37.781014    8732 round_trippers.go:580]     Audit-Id: 8b9e257c-796f-4f30-bbf8-12d68ac1f84d
	I0127 12:15:37.781014    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:37.781014    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:37.781014    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:37.781014    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:37.781014    8732 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s46mv","generateName":"kube-proxy-","namespace":"kube-system","uid":"ae3b8daf-d674-4cfe-8652-cb5ff6ba8615","resourceVersion":"392","creationTimestamp":"2025-01-27T12:12:03Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d88eb776-b464-4f2b-8140-44249610a7fa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d88eb776-b464-4f2b-8140-44249610a7fa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6194 chars]
	I0127 12:15:37.978409    8732 request.go:632] Waited for 196.3259ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:15:37.978881    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:15:37.978961    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:37.978961    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:37.978961    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:37.982316    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:15:37.982316    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:37.982316    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:37 GMT
	I0127 12:15:37.982316    8732 round_trippers.go:580]     Audit-Id: 34315d87-3579-4ed1-a73e-33bbb46da07a
	I0127 12:15:37.982414    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:37.982414    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:37.982414    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:37.982414    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:37.982742    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"451","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0127 12:15:37.983799    8732 pod_ready.go:93] pod "kube-proxy-s46mv" in "kube-system" namespace has status "Ready":"True"
	I0127 12:15:37.983799    8732 pod_ready.go:82] duration metric: took 398.2634ms for pod "kube-proxy-s46mv" in "kube-system" namespace to be "Ready" ...
	I0127 12:15:37.983799    8732 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:15:38.177895    8732 request.go:632] Waited for 193.9954ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-659000
	I0127 12:15:38.178662    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-659000
	I0127 12:15:38.178706    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:38.178706    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:38.178706    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:38.182989    8732 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:15:38.182989    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:38.182989    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:38.182989    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:38.182989    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:38 GMT
	I0127 12:15:38.183180    8732 round_trippers.go:580]     Audit-Id: 7bbe0ba6-39f4-43c8-a35f-cbe832d3cb89
	I0127 12:15:38.183180    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:38.183180    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:38.183428    8732 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-659000","namespace":"kube-system","uid":"52b91964-a331-4925-9e07-c8df32b4176d","resourceVersion":"403","creationTimestamp":"2025-01-27T12:11:58Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e6c90fc43fa6c0754218ff1c4162045d","kubernetes.io/config.mirror":"e6c90fc43fa6c0754218ff1c4162045d","kubernetes.io/config.seen":"2025-01-27T12:11:51.419790825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:11:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5357 chars]
	I0127 12:15:38.378447    8732 request.go:632] Waited for 194.3264ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:15:38.378447    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes/multinode-659000
	I0127 12:15:38.378447    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:38.378447    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:38.378447    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:38.382401    8732 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:15:38.382669    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:38.382669    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:38.382669    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:38.382669    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:38 GMT
	I0127 12:15:38.382669    8732 round_trippers.go:580]     Audit-Id: 40c02cb6-bb91-4213-9b36-e2eb8494f21d
	I0127 12:15:38.382669    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:38.382669    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:38.382978    8732 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"451","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0127 12:15:38.384336    8732 pod_ready.go:93] pod "kube-scheduler-multinode-659000" in "kube-system" namespace has status "Ready":"True"
	I0127 12:15:38.384336    8732 pod_ready.go:82] duration metric: took 400.434ms for pod "kube-scheduler-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:15:38.384400    8732 pod_ready.go:39] duration metric: took 1.1998496s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:15:38.384489    8732 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 12:15:38.394431    8732 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:15:38.416538    8732 system_svc.go:56] duration metric: took 32.0479ms WaitForService to wait for kubelet
	I0127 12:15:38.417534    8732 kubeadm.go:582] duration metric: took 30.0048957s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:15:38.417603    8732 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:15:38.578060    8732 request.go:632] Waited for 160.455ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.204.17:8443/api/v1/nodes
	I0127 12:15:38.578524    8732 round_trippers.go:463] GET https://172.29.204.17:8443/api/v1/nodes
	I0127 12:15:38.578589    8732 round_trippers.go:469] Request Headers:
	I0127 12:15:38.578589    8732 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:15:38.578589    8732 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:15:38.585367    8732 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:15:38.585448    8732 round_trippers.go:577] Response Headers:
	I0127 12:15:38.585448    8732 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:15:38.585448    8732 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:15:38.585509    8732 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:15:38 GMT
	I0127 12:15:38.585509    8732 round_trippers.go:580]     Audit-Id: 2c46f7e2-65c8-4d0f-ab4c-4ee0275d4e03
	I0127 12:15:38.585509    8732 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:15:38.585509    8732 round_trippers.go:580]     Content-Type: application/json
	I0127 12:15:38.585726    8732 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"654"},"items":[{"metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"451","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9661 chars]
	I0127 12:15:38.587056    8732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:15:38.587128    8732 node_conditions.go:123] node cpu capacity is 2
	I0127 12:15:38.587185    8732 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:15:38.587185    8732 node_conditions.go:123] node cpu capacity is 2
	I0127 12:15:38.587230    8732 node_conditions.go:105] duration metric: took 169.6256ms to run NodePressure ...
	I0127 12:15:38.587230    8732 start.go:241] waiting for startup goroutines ...
	I0127 12:15:38.587294    8732 start.go:255] writing updated cluster config ...
	I0127 12:15:38.598737    8732 ssh_runner.go:195] Run: rm -f paused
	I0127 12:15:38.754615    8732 start.go:600] kubectl: 1.32.1, cluster: 1.32.1 (minor skew: 0)
	I0127 12:15:38.758745    8732 out.go:177] * Done! kubectl is now configured to use "multinode-659000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jan 27 12:12:26 multinode-659000 dockerd[1456]: time="2025-01-27T12:12:26.675304771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 12:12:26 multinode-659000 dockerd[1456]: time="2025-01-27T12:12:26.694537232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 27 12:12:26 multinode-659000 dockerd[1456]: time="2025-01-27T12:12:26.695508535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 27 12:12:26 multinode-659000 dockerd[1456]: time="2025-01-27T12:12:26.695607535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 12:12:26 multinode-659000 dockerd[1456]: time="2025-01-27T12:12:26.697485541Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 12:12:26 multinode-659000 cri-dockerd[1348]: time="2025-01-27T12:12:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc9ef8ee86ec2e354006c4c56f82fe9ec4df472096628ad620faba06fa0b1ff8/resolv.conf as [nameserver 172.29.192.1]"
	Jan 27 12:12:26 multinode-659000 cri-dockerd[1348]: time="2025-01-27T12:12:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4a53e133a1cd6ab9514cb15ac3c4f1d5683d17008b482cebb08bf4809e060709/resolv.conf as [nameserver 172.29.192.1]"
	Jan 27 12:12:27 multinode-659000 dockerd[1456]: time="2025-01-27T12:12:27.096622355Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 27 12:12:27 multinode-659000 dockerd[1456]: time="2025-01-27T12:12:27.096900964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 27 12:12:27 multinode-659000 dockerd[1456]: time="2025-01-27T12:12:27.097040168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 12:12:27 multinode-659000 dockerd[1456]: time="2025-01-27T12:12:27.097317777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 12:12:27 multinode-659000 dockerd[1456]: time="2025-01-27T12:12:27.202289245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 27 12:12:27 multinode-659000 dockerd[1456]: time="2025-01-27T12:12:27.202420749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 27 12:12:27 multinode-659000 dockerd[1456]: time="2025-01-27T12:12:27.202515152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 12:12:27 multinode-659000 dockerd[1456]: time="2025-01-27T12:12:27.202674757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 12:16:03 multinode-659000 dockerd[1456]: time="2025-01-27T12:16:03.213283031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 27 12:16:03 multinode-659000 dockerd[1456]: time="2025-01-27T12:16:03.213366631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 27 12:16:03 multinode-659000 dockerd[1456]: time="2025-01-27T12:16:03.213385831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 12:16:03 multinode-659000 dockerd[1456]: time="2025-01-27T12:16:03.213625333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 12:16:03 multinode-659000 cri-dockerd[1348]: time="2025-01-27T12:16:03Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4c82c0ec4aeaa9b21462a8248326ae982d6f7a0aee31347f1a58d216f0335177/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jan 27 12:16:05 multinode-659000 cri-dockerd[1348]: time="2025-01-27T12:16:05Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Jan 27 12:16:05 multinode-659000 dockerd[1456]: time="2025-01-27T12:16:05.398151836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 27 12:16:05 multinode-659000 dockerd[1456]: time="2025-01-27T12:16:05.398385139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 27 12:16:05 multinode-659000 dockerd[1456]: time="2025-01-27T12:16:05.398405039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 12:16:05 multinode-659000 dockerd[1456]: time="2025-01-27T12:16:05.398541540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	998a64b2baa2d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   48 seconds ago      Running             busybox                   0                   4c82c0ec4aeaa       busybox-58667487b6-2jq9j
	f818dd15d8b02       c69fa2e9cbf5f                                                                                         4 minutes ago       Running             coredns                   0                   4a53e133a1cd6       coredns-668d6bf9bc-2qw6w
	134620caeeb93       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   bc9ef8ee86ec2       storage-provisioner
	d758000dda95d       kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108              4 minutes ago       Running             kindnet-cni               0                   f2d0bd65fe50d       kindnet-z2hqq
	bbec7ccef7da5       e29f9c7391fd9                                                                                         4 minutes ago       Running             kube-proxy                0                   319cddeebceb6       kube-proxy-s46mv
	a16e06a038601       2b0d6572d062c                                                                                         5 minutes ago       Running             kube-scheduler            0                   5423fc5113290       kube-scheduler-multinode-659000
	e07a66f8f6196       019ee182b58e2                                                                                         5 minutes ago       Running             kube-controller-manager   0                   1bd5bf99bede3       kube-controller-manager-multinode-659000
	5f274e5a8851d       a9e7e6b294baf                                                                                         5 minutes ago       Running             etcd                      0                   51ee4649b24aa       etcd-multinode-659000
	f91e9c2d3ba64       95c0bda56fc4d                                                                                         5 minutes ago       Running             kube-apiserver            0                   1b522c4c9f4c7       kube-apiserver-multinode-659000
	
	
	==> coredns [f818dd15d8b0] <==
	[INFO] 10.244.0.3:34935 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092101s
	[INFO] 10.244.1.2:54822 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155102s
	[INFO] 10.244.1.2:50877 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000188102s
	[INFO] 10.244.1.2:45384 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183802s
	[INFO] 10.244.1.2:35073 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000227202s
	[INFO] 10.244.1.2:50517 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000061101s
	[INFO] 10.244.1.2:37353 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130501s
	[INFO] 10.244.1.2:42117 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114301s
	[INFO] 10.244.1.2:46171 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060401s
	[INFO] 10.244.0.3:55282 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117601s
	[INFO] 10.244.0.3:41761 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162301s
	[INFO] 10.244.0.3:35358 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000218902s
	[INFO] 10.244.0.3:50342 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124402s
	[INFO] 10.244.1.2:38159 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159602s
	[INFO] 10.244.1.2:37043 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000171002s
	[INFO] 10.244.1.2:50762 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000168301s
	[INFO] 10.244.1.2:33014 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000603s
	[INFO] 10.244.0.3:34941 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134301s
	[INFO] 10.244.0.3:60117 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000393904s
	[INFO] 10.244.0.3:47506 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000214402s
	[INFO] 10.244.0.3:42968 - 5 "PTR IN 1.192.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000443604s
	[INFO] 10.244.1.2:52260 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193802s
	[INFO] 10.244.1.2:40492 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000310903s
	[INFO] 10.244.1.2:50341 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074s
	[INFO] 10.244.1.2:41676 - 5 "PTR IN 1.192.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000637s
	
	
	==> describe nodes <==
	Name:               multinode-659000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-659000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	                    minikube.k8s.io/name=multinode-659000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T12_12_00_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 12:11:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-659000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 12:16:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 12:16:33 +0000   Mon, 27 Jan 2025 12:11:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 12:16:33 +0000   Mon, 27 Jan 2025 12:11:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 12:16:33 +0000   Mon, 27 Jan 2025 12:11:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 12:16:33 +0000   Mon, 27 Jan 2025 12:12:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.204.17
	  Hostname:    multinode-659000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 75ef1ba9d8794f609aed5dee0d0693ea
	  System UUID:                be6234aa-9e29-bb41-8165-59b265a4d7d0
	  Boot ID:                    2d7fc9df-6335-47ce-98e9-f27803e4ffcf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-2jq9j                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 coredns-668d6bf9bc-2qw6w                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m49s
	  kube-system                 etcd-multinode-659000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m54s
	  kube-system                 kindnet-z2hqq                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m50s
	  kube-system                 kube-apiserver-multinode-659000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-controller-manager-multinode-659000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-proxy-s46mv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-scheduler-multinode-659000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m47s                kube-proxy       
	  Normal  Starting                 5m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m2s (x8 over 5m2s)  kubelet          Node multinode-659000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m2s (x8 over 5m2s)  kubelet          Node multinode-659000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m2s (x7 over 5m2s)  kubelet          Node multinode-659000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m54s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m54s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m54s                kubelet          Node multinode-659000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m54s                kubelet          Node multinode-659000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m54s                kubelet          Node multinode-659000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m50s                node-controller  Node multinode-659000 event: Registered Node multinode-659000 in Controller
	  Normal  NodeReady                4m27s                kubelet          Node multinode-659000 status is now: NodeReady
	
	
	Name:               multinode-659000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-659000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	                    minikube.k8s.io/name=multinode-659000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_01_27T12_15_08_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 12:15:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-659000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 12:16:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 12:16:09 +0000   Mon, 27 Jan 2025 12:15:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 12:16:09 +0000   Mon, 27 Jan 2025 12:15:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 12:16:09 +0000   Mon, 27 Jan 2025 12:15:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 12:16:09 +0000   Mon, 27 Jan 2025 12:15:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.199.129
	  Hostname:    multinode-659000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 30ce15ff72904b54b07c49f3e2f28802
	  System UUID:                b6923799-fa1e-b54c-9340-50dd6a2378f5
	  Boot ID:                    3308d183-ec79-4aeb-9d90-80d47cdbff63
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-ktfxc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kube-system                 kindnet-n7vjl               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      106s
	  kube-system                 kube-proxy-pjhc8            0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 93s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  106s (x2 over 106s)  kubelet          Node multinode-659000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s (x2 over 106s)  kubelet          Node multinode-659000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s (x2 over 106s)  kubelet          Node multinode-659000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  106s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           105s                 node-controller  Node multinode-659000-m02 event: Registered Node multinode-659000-m02 in Controller
	  Normal  NodeReady                77s                  kubelet          Node multinode-659000-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +45.741363] systemd-fstab-generator[642]: Ignoring "noauto" option for root device
	[  +0.154343] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[Jan27 12:11] systemd-fstab-generator[1011]: Ignoring "noauto" option for root device
	[  +0.107250] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.524712] systemd-fstab-generator[1051]: Ignoring "noauto" option for root device
	[  +0.194246] systemd-fstab-generator[1063]: Ignoring "noauto" option for root device
	[  +0.228489] systemd-fstab-generator[1077]: Ignoring "noauto" option for root device
	[  +2.820138] systemd-fstab-generator[1301]: Ignoring "noauto" option for root device
	[  +0.185188] systemd-fstab-generator[1313]: Ignoring "noauto" option for root device
	[  +0.189821] systemd-fstab-generator[1325]: Ignoring "noauto" option for root device
	[  +0.241193] systemd-fstab-generator[1340]: Ignoring "noauto" option for root device
	[ +10.927847] systemd-fstab-generator[1440]: Ignoring "noauto" option for root device
	[  +0.101131] kauditd_printk_skb: 206 callbacks suppressed
	[  +3.687721] systemd-fstab-generator[1700]: Ignoring "noauto" option for root device
	[  +5.173667] systemd-fstab-generator[1842]: Ignoring "noauto" option for root device
	[  +0.092604] kauditd_printk_skb: 74 callbacks suppressed
	[  +8.068242] systemd-fstab-generator[2271]: Ignoring "noauto" option for root device
	[  +0.117515] kauditd_printk_skb: 62 callbacks suppressed
	[Jan27 12:12] systemd-fstab-generator[2368]: Ignoring "noauto" option for root device
	[  +0.671740] hrtimer: interrupt took 2890192 ns
	[  +0.642714] kauditd_printk_skb: 34 callbacks suppressed
	[  +8.912195] kauditd_printk_skb: 29 callbacks suppressed
	[Jan27 12:16] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [5f274e5a8851] <==
	{"level":"info","ts":"2025-01-27T12:11:54.079341Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:11:54.079639Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T12:11:54.080672Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T12:11:54.081135Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T12:11:54.082237Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-27T12:11:54.084663Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T12:11:54.086467Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-01-27T12:11:54.088079Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T12:11:54.093773Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.29.204.17:2379"}
	{"level":"info","ts":"2025-01-27T12:11:54.094231Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d020e240c474bd89","local-member-id":"925e6945be3a5b5b","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:11:54.102412Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:11:54.109746Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:12:08.875914Z","caller":"traceutil/trace.go:171","msg":"trace[1919271846] linearizableReadLoop","detail":"{readStateIndex:412; appliedIndex:411; }","duration":"177.754492ms","start":"2025-01-27T12:12:08.698138Z","end":"2025-01-27T12:12:08.875892Z","steps":["trace[1919271846] 'read index received'  (duration: 177.463192ms)","trace[1919271846] 'applied index is now lower than readState.Index'  (duration: 290.8µs)"],"step_count":2}
	{"level":"info","ts":"2025-01-27T12:12:08.876370Z","caller":"traceutil/trace.go:171","msg":"trace[107990277] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"234.839525ms","start":"2025-01-27T12:12:08.641517Z","end":"2025-01-27T12:12:08.876357Z","steps":["trace[107990277] 'process raft request'  (duration: 234.148026ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:12:08.876636Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.468091ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-659000\" limit:1 ","response":"range_response_count:1 size:4487"}
	{"level":"info","ts":"2025-01-27T12:12:08.876675Z","caller":"traceutil/trace.go:171","msg":"trace[1113791383] range","detail":"{range_begin:/registry/minions/multinode-659000; range_end:; response_count:1; response_revision:399; }","duration":"178.580391ms","start":"2025-01-27T12:12:08.698086Z","end":"2025-01-27T12:12:08.876666Z","steps":["trace[1113791383] 'agreement among raft nodes before linearized reading'  (duration: 178.453891ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:12:08.876921Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.124822ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:12:08.877011Z","caller":"traceutil/trace.go:171","msg":"trace[2039584234] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:399; }","duration":"152.260722ms","start":"2025-01-27T12:12:08.724737Z","end":"2025-01-27T12:12:08.876998Z","steps":["trace[2039584234] 'agreement among raft nodes before linearized reading'  (duration: 152.146522ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:12:09.012966Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.128082ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:12:09.013051Z","caller":"traceutil/trace.go:171","msg":"trace[1322438069] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:399; }","duration":"104.308182ms","start":"2025-01-27T12:12:08.908724Z","end":"2025-01-27T12:12:09.013033Z","steps":["trace[1322438069] 'range keys from in-memory index tree'  (duration: 103.976982ms)"],"step_count":1}
	{"level":"info","ts":"2025-01-27T12:15:12.859534Z","caller":"traceutil/trace.go:171","msg":"trace[1281735961] transaction","detail":"{read_only:false; response_revision:612; number_of_response:1; }","duration":"145.747467ms","start":"2025-01-27T12:15:12.713766Z","end":"2025-01-27T12:15:12.859513Z","steps":["trace[1281735961] 'process raft request'  (duration: 145.375463ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:15:18.256983Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.760222ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/172.29.204.17\" limit:1 ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2025-01-27T12:15:18.259812Z","caller":"traceutil/trace.go:171","msg":"trace[1632169658] range","detail":"{range_begin:/registry/masterleases/172.29.204.17; range_end:; response_count:1; response_revision:619; }","duration":"204.753748ms","start":"2025-01-27T12:15:18.055043Z","end":"2025-01-27T12:15:18.259797Z","steps":["trace[1632169658] 'range keys from in-memory index tree'  (duration: 201.556321ms)"],"step_count":1}
	{"level":"warn","ts":"2025-01-27T12:15:18.257028Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"348.347276ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-01-27T12:15:18.261235Z","caller":"traceutil/trace.go:171","msg":"trace[512091541] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:619; }","duration":"352.503412ms","start":"2025-01-27T12:15:17.908665Z","end":"2025-01-27T12:15:18.261169Z","steps":["trace[512091541] 'range keys from in-memory index tree'  (duration: 348.338076ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:16:53 up 6 min,  0 users,  load average: 1.25, 0.78, 0.35
	Linux multinode-659000 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [d758000dda95] <==
	I0127 12:15:44.854260       1 main.go:301] handling current node
	I0127 12:15:54.855781       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:15:54.855819       1 main.go:301] handling current node
	I0127 12:15:54.855842       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:15:54.855848       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:16:04.860361       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:16:04.860398       1 main.go:301] handling current node
	I0127 12:16:04.860415       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:16:04.860422       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:16:14.853458       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:16:14.853594       1 main.go:301] handling current node
	I0127 12:16:14.853614       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:16:14.853622       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:16:24.857076       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:16:24.857170       1 main.go:301] handling current node
	I0127 12:16:24.857191       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:16:24.857199       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:16:34.854350       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:16:34.854506       1 main.go:301] handling current node
	I0127 12:16:34.854528       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:16:34.854537       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:16:44.856395       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:16:44.856581       1 main.go:301] handling current node
	I0127 12:16:44.856624       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:16:44.856633       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [f91e9c2d3ba6] <==
	I0127 12:11:56.498342       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0127 12:11:56.505158       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0127 12:11:56.505241       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 12:11:57.834204       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 12:11:57.907714       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 12:11:58.062684       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0127 12:11:58.078004       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.29.204.17]
	I0127 12:11:58.079125       1 controller.go:615] quota admission added evaluator for: endpoints
	I0127 12:11:58.091255       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0127 12:11:58.606247       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0127 12:11:59.161644       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0127 12:11:59.224778       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0127 12:11:59.264773       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0127 12:12:03.904645       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0127 12:12:03.981891       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0127 12:16:08.605036       1 conn.go:339] Error on socket receive: read tcp 172.29.204.17:8443->172.29.192.1:51757: use of closed network connection
	E0127 12:16:09.141350       1 conn.go:339] Error on socket receive: read tcp 172.29.204.17:8443->172.29.192.1:51759: use of closed network connection
	E0127 12:16:09.737085       1 conn.go:339] Error on socket receive: read tcp 172.29.204.17:8443->172.29.192.1:51762: use of closed network connection
	E0127 12:16:10.242994       1 conn.go:339] Error on socket receive: read tcp 172.29.204.17:8443->172.29.192.1:51764: use of closed network connection
	E0127 12:16:10.727117       1 conn.go:339] Error on socket receive: read tcp 172.29.204.17:8443->172.29.192.1:51766: use of closed network connection
	E0127 12:16:11.251508       1 conn.go:339] Error on socket receive: read tcp 172.29.204.17:8443->172.29.192.1:51768: use of closed network connection
	E0127 12:16:12.202580       1 conn.go:339] Error on socket receive: read tcp 172.29.204.17:8443->172.29.192.1:51771: use of closed network connection
	E0127 12:16:22.738539       1 conn.go:339] Error on socket receive: read tcp 172.29.204.17:8443->172.29.192.1:51773: use of closed network connection
	E0127 12:16:23.218346       1 conn.go:339] Error on socket receive: read tcp 172.29.204.17:8443->172.29.192.1:51775: use of closed network connection
	E0127 12:16:33.693741       1 conn.go:339] Error on socket receive: read tcp 172.29.204.17:8443->172.29.192.1:51777: use of closed network connection
	
	
	==> kube-controller-manager [e07a66f8f619] <==
	I0127 12:15:07.611410       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m02\" does not exist"
	I0127 12:15:07.630009       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-659000-m02" podCIDRs=["10.244.1.0/24"]
	I0127 12:15:07.631297       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:15:07.631526       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:15:07.655401       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:15:07.883346       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:15:08.169505       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m02"
	I0127 12:15:08.255644       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:15:08.418223       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:15:17.811768       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:15:36.752543       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:15:36.753915       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:15:36.769807       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:15:38.199464       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:15:38.449749       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:16:02.550786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="103.313802ms"
	I0127 12:16:02.585867       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="34.67067ms"
	I0127 12:16:02.586257       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="347.903µs"
	I0127 12:16:02.588870       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="48.6µs"
	I0127 12:16:05.434486       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="13.589639ms"
	I0127 12:16:05.435765       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="54.401µs"
	I0127 12:16:05.890170       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="9.003392ms"
	I0127 12:16:05.890477       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="36.901µs"
	I0127 12:16:09.305780       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:16:33.434322       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	
	
	==> kube-proxy [bbec7ccef7da] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 12:12:05.352123       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 12:12:05.378799       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.204.17"]
	E0127 12:12:05.378872       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 12:12:05.470419       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 12:12:05.470552       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 12:12:05.470596       1 server_linux.go:170] "Using iptables Proxier"
	I0127 12:12:05.475557       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 12:12:05.476697       1 server.go:497] "Version info" version="v1.32.1"
	I0127 12:12:05.476717       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:12:05.478788       1 config.go:199] "Starting service config controller"
	I0127 12:12:05.478844       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 12:12:05.478916       1 config.go:105] "Starting endpoint slice config controller"
	I0127 12:12:05.479018       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 12:12:05.480053       1 config.go:329] "Starting node config controller"
	I0127 12:12:05.480113       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 12:12:05.579605       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 12:12:05.579669       1 shared_informer.go:320] Caches are synced for service config
	I0127 12:12:05.580463       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a16e06a03860] <==
	W0127 12:11:56.846817       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0127 12:11:56.847194       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 12:11:56.871314       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0127 12:11:56.872178       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:11:56.887386       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 12:11:56.887549       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:11:56.918642       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 12:11:56.919135       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:11:57.039216       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 12:11:57.039707       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:11:57.055169       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 12:11:57.055233       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 12:11:57.106656       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 12:11:57.106828       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:11:57.214186       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 12:11:57.214290       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:11:57.298150       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 12:11:57.298337       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:11:57.310098       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 12:11:57.310312       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 12:11:57.312117       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 12:11:57.312192       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:11:57.321525       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 12:11:57.321832       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:11:59.701790       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 12:12:26 multinode-659000 kubelet[2279]: I0127 12:12:26.173872    2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsxwc\" (UniqueName: \"kubernetes.io/projected/bcfd7913-1bc0-4c24-882f-2be92ec9b046-kube-api-access-dsxwc\") pod \"storage-provisioner\" (UID: \"bcfd7913-1bc0-4c24-882f-2be92ec9b046\") " pod="kube-system/storage-provisioner"
	Jan 27 12:12:26 multinode-659000 kubelet[2279]: I0127 12:12:26.914987    2279 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a53e133a1cd6ab9514cb15ac3c4f1d5683d17008b482cebb08bf4809e060709"
	Jan 27 12:12:27 multinode-659000 kubelet[2279]: I0127 12:12:27.986830    2279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2qw6w" podStartSLOduration=23.986813412 podStartE2EDuration="23.986813412s" podCreationTimestamp="2025-01-27 12:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 12:12:27.964371092 +0000 UTC m=+28.986664222" watchObservedRunningTime="2025-01-27 12:12:27.986813412 +0000 UTC m=+29.009106542"
	Jan 27 12:12:59 multinode-659000 kubelet[2279]: E0127 12:12:59.264983    2279 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 12:12:59 multinode-659000 kubelet[2279]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 12:12:59 multinode-659000 kubelet[2279]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 12:12:59 multinode-659000 kubelet[2279]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 12:12:59 multinode-659000 kubelet[2279]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 12:13:59 multinode-659000 kubelet[2279]: E0127 12:13:59.264023    2279 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 12:13:59 multinode-659000 kubelet[2279]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 12:13:59 multinode-659000 kubelet[2279]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 12:13:59 multinode-659000 kubelet[2279]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 12:13:59 multinode-659000 kubelet[2279]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 12:14:59 multinode-659000 kubelet[2279]: E0127 12:14:59.273510    2279 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 12:14:59 multinode-659000 kubelet[2279]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 12:14:59 multinode-659000 kubelet[2279]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 12:14:59 multinode-659000 kubelet[2279]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 12:14:59 multinode-659000 kubelet[2279]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 12:15:59 multinode-659000 kubelet[2279]: E0127 12:15:59.265750    2279 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 12:15:59 multinode-659000 kubelet[2279]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 12:15:59 multinode-659000 kubelet[2279]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 12:15:59 multinode-659000 kubelet[2279]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 12:15:59 multinode-659000 kubelet[2279]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 12:16:02 multinode-659000 kubelet[2279]: I0127 12:16:02.525255    2279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=231.525202392 podStartE2EDuration="3m51.525202392s" podCreationTimestamp="2025-01-27 12:12:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 12:12:28.018751394 +0000 UTC m=+29.041044524" watchObservedRunningTime="2025-01-27 12:16:02.525202392 +0000 UTC m=+243.547495622"
	Jan 27 12:16:02 multinode-659000 kubelet[2279]: I0127 12:16:02.700229    2279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpzlq\" (UniqueName: \"kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq\") pod \"busybox-58667487b6-2jq9j\" (UID: \"244fa7e9-f6c4-46a7-b61f-8717e13fd270\") " pod="default/busybox-58667487b6-2jq9j"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-659000 -n multinode-659000
E0127 12:17:04.029502    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-659000 -n multinode-659000: (12.0213388s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-659000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (56.66s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (471.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-659000
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-659000
E0127 12:32:04.038534    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-659000: (1m35.79254s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-659000 --wait=true -v=8 --alsologtostderr
E0127 12:35:07.126748    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 12:35:47.481838    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 12:37:04.044285    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-659000 --wait=true -v=8 --alsologtostderr: exit status 1 (5m25.1398144s)

                                                
                                                
-- stdout --
	* [multinode-659000] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-659000" primary control-plane node in "multinode-659000" cluster
	* Restarting existing hyperv VM for "multinode-659000" ...
	* Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	
	* Starting "multinode-659000-m02" worker node in "multinode-659000" cluster
	* Restarting existing hyperv VM for "multinode-659000-m02" ...
	* Found network options:
	  - NO_PROXY=172.29.198.106
	  - NO_PROXY=172.29.198.106
	* Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	  - env NO_PROXY=172.29.198.106

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 12:33:36.181635    9948 out.go:345] Setting OutFile to fd 1164 ...
	I0127 12:33:36.251813    9948 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:33:36.251813    9948 out.go:358] Setting ErrFile to fd 1144...
	I0127 12:33:36.251813    9948 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:33:36.274144    9948 out.go:352] Setting JSON to false
	I0127 12:33:36.277140    9948 start.go:129] hostinfo: {"hostname":"minikube6","uptime":444199,"bootTime":1737537016,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5371 Build 19045.5371","kernelVersion":"10.0.19045.5371 Build 19045.5371","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0127 12:33:36.277140    9948 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0127 12:33:36.351296    9948 out.go:177] * [multinode-659000] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	I0127 12:33:36.363777    9948 notify.go:220] Checking for updates...
	I0127 12:33:36.370556    9948 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 12:33:36.405648    9948 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:33:36.419948    9948 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0127 12:33:36.433252    9948 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:33:36.454182    9948 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:33:36.460900    9948 config.go:182] Loaded profile config "multinode-659000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:33:36.460900    9948 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:33:41.755333    9948 out.go:177] * Using the hyperv driver based on existing profile
	I0127 12:33:41.765467    9948 start.go:297] selected driver: hyperv
	I0127 12:33:41.765467    9948 start.go:901] validating driver "hyperv" against &{Name:multinode-659000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-659000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.204.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.199.129 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.206.88 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:33:41.765467    9948 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:33:41.817079    9948 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:33:41.817079    9948 cni.go:84] Creating CNI manager for ""
	I0127 12:33:41.817079    9948 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0127 12:33:41.817658    9948 start.go:340] cluster config:
	{Name:multinode-659000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-659000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.204.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.199.129 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.206.88 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:33:41.817803    9948 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:33:41.912388    9948 out.go:177] * Starting "multinode-659000" primary control-plane node in "multinode-659000" cluster
	I0127 12:33:41.917744    9948 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0127 12:33:41.918332    9948 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0127 12:33:41.918332    9948 cache.go:56] Caching tarball of preloaded images
	I0127 12:33:41.918796    9948 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 12:33:41.918796    9948 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0127 12:33:41.919337    9948 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\config.json ...
	I0127 12:33:41.924139    9948 start.go:360] acquireMachinesLock for multinode-659000: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:33:41.924560    9948 start.go:364] duration metric: took 115.2µs to acquireMachinesLock for "multinode-659000"
	I0127 12:33:41.925668    9948 start.go:96] Skipping create...Using existing machine configuration
	I0127 12:33:41.925668    9948 fix.go:54] fixHost starting: 
	I0127 12:33:41.926312    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:33:44.612570    9948 main.go:141] libmachine: [stdout =====>] : Off
	
	I0127 12:33:44.612657    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:33:44.612657    9948 fix.go:112] recreateIfNeeded on multinode-659000: state=Stopped err=<nil>
	W0127 12:33:44.612657    9948 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 12:33:44.756131    9948 out.go:177] * Restarting existing hyperv VM for "multinode-659000" ...
	I0127 12:33:44.804183    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-659000
	I0127 12:33:47.819240    9948 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:33:47.820017    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:33:47.820017    9948 main.go:141] libmachine: Waiting for host to start...
	I0127 12:33:47.820073    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:33:49.973733    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:33:49.973733    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:33:49.973733    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:33:52.378547    9948 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:33:52.378547    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:33:53.380366    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:33:55.489072    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:33:55.489889    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:33:55.489975    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:33:57.988771    9948 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:33:57.988771    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:33:58.988973    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:34:01.184614    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:34:01.184614    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:01.184614    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:34:03.677566    9948 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:34:03.677662    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:04.677924    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:34:06.826044    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:34:06.826044    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:06.826140    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:34:09.249700    9948 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:34:09.249700    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:10.251014    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:34:12.403029    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:34:12.403029    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:12.403319    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:34:14.858430    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:34:14.858430    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:14.861484    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:34:16.940482    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:34:16.940482    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:16.940482    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:34:19.405200    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:34:19.405200    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:19.405200    9948 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\config.json ...
	I0127 12:34:19.408976    9948 machine.go:93] provisionDockerMachine start ...
	I0127 12:34:19.409295    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:34:21.464326    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:34:21.464326    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:21.464326    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:34:23.974791    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:34:23.975617    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:23.980768    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:34:23.981360    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.198.106 22 <nil> <nil>}
	I0127 12:34:23.981360    9948 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 12:34:24.122270    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 12:34:24.122270    9948 buildroot.go:166] provisioning hostname "multinode-659000"
	I0127 12:34:24.122270    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:34:26.208942    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:34:26.209389    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:26.209479    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:34:28.645671    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:34:28.645949    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:28.650839    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:34:28.650839    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.198.106 22 <nil> <nil>}
	I0127 12:34:28.650839    9948 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-659000 && echo "multinode-659000" | sudo tee /etc/hostname
	I0127 12:34:28.808809    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-659000
	
	I0127 12:34:28.808951    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:34:30.823522    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:34:30.823665    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:30.823720    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:34:33.232639    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:34:33.232639    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:33.238810    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:34:33.239010    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.198.106 22 <nil> <nil>}
	I0127 12:34:33.239010    9948 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-659000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-659000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-659000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:34:33.394842    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
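	The hostname step above can be replayed by hand. A minimal sketch, assuming a shell inside the VM (for example via "minikube ssh -p multinode-659000") and using the node name from this run:

	    # run inside the minikube VM; NODE is the hostname this profile expects
	    NODE=multinode-659000
	    sudo hostname "$NODE" && echo "$NODE" | sudo tee /etc/hostname
	    # keep /etc/hosts consistent: rewrite an existing 127.0.1.1 entry or append one
	    if ! grep -q "[[:space:]]$NODE\$" /etc/hosts; then
	      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
	        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NODE/" /etc/hosts
	      else
	        echo "127.0.1.1 $NODE" | sudo tee -a /etc/hosts
	      fi
	    fi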
	I0127 12:34:33.394842    9948 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0127 12:34:33.394842    9948 buildroot.go:174] setting up certificates
	I0127 12:34:33.394842    9948 provision.go:84] configureAuth start
	I0127 12:34:33.394842    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:34:35.443924    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:34:35.444484    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:35.444592    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:34:37.821223    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:34:37.821223    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:37.821990    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:34:39.846534    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:34:39.846663    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:39.846663    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:34:42.243984    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:34:42.244935    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:42.244935    9948 provision.go:143] copyHostCerts
	I0127 12:34:42.245205    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0127 12:34:42.245326    9948 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0127 12:34:42.245326    9948 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0127 12:34:42.245919    9948 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0127 12:34:42.246658    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0127 12:34:42.247407    9948 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0127 12:34:42.247407    9948 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0127 12:34:42.247760    9948 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0127 12:34:42.248604    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0127 12:34:42.248604    9948 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0127 12:34:42.249132    9948 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0127 12:34:42.249338    9948 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0127 12:34:42.250527    9948 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-659000 san=[127.0.0.1 172.29.198.106 localhost minikube multinode-659000]
	I0127 12:34:42.435902    9948 provision.go:177] copyRemoteCerts
	I0127 12:34:42.446432    9948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:34:42.447011    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:34:44.441075    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:34:44.441992    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:44.442060    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:34:46.880196    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:34:46.881114    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:46.881684    9948 sshutil.go:53] new ssh client: &{IP:172.29.198.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000\id_rsa Username:docker}
	I0127 12:34:46.990000    9948 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5429415s)
	I0127 12:34:46.990000    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0127 12:34:46.990601    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 12:34:47.032978    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0127 12:34:47.033578    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0127 12:34:47.086735    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0127 12:34:47.087326    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 12:34:47.130626    9948 provision.go:87] duration metric: took 13.7356397s to configureAuth
	I0127 12:34:47.130626    9948 buildroot.go:189] setting minikube options for container-runtime
	I0127 12:34:47.131301    9948 config.go:182] Loaded profile config "multinode-659000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:34:47.131301    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:34:49.119922    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:34:49.119922    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:49.120788    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:34:51.515761    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:34:51.516107    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:51.522691    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:34:51.523381    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.198.106 22 <nil> <nil>}
	I0127 12:34:51.523381    9948 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0127 12:34:51.655115    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0127 12:34:51.655115    9948 buildroot.go:70] root file system type: tmpfs
	I0127 12:34:51.655115    9948 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0127 12:34:51.655115    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:34:53.659970    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:34:53.659970    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:53.659970    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:34:56.093986    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:34:56.093986    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:56.099701    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:34:56.100348    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.198.106 22 <nil> <nil>}
	I0127 12:34:56.100348    9948 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0127 12:34:56.266086    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0127 12:34:56.266086    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:34:58.267768    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:34:58.268056    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:58.268056    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:35:00.723131    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:35:00.723131    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:35:00.728427    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:35:00.729159    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.198.106 22 <nil> <nil>}
	I0127 12:35:00.729159    9948 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0127 12:35:03.256939    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0127 12:35:03.257053    9948 machine.go:96] duration metric: took 43.8476164s to provisionDockerMachine
	I0127 12:35:03.257053    9948 start.go:293] postStartSetup for "multinode-659000" (driver="hyperv")
	I0127 12:35:03.257053    9948 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:35:03.267563    9948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:35:03.267563    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:35:05.316508    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:35:05.316508    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:35:05.316664    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:35:07.700356    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:35:07.700593    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:35:07.700593    9948 sshutil.go:53] new ssh client: &{IP:172.29.198.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000\id_rsa Username:docker}
	I0127 12:35:07.811310    9948 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5435572s)
	I0127 12:35:07.821716    9948 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:35:07.829117    9948 command_runner.go:130] > NAME=Buildroot
	I0127 12:35:07.829198    9948 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0127 12:35:07.829198    9948 command_runner.go:130] > ID=buildroot
	I0127 12:35:07.829198    9948 command_runner.go:130] > VERSION_ID=2023.02.9
	I0127 12:35:07.829198    9948 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0127 12:35:07.829325    9948 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 12:35:07.829391    9948 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0127 12:35:07.829690    9948 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0127 12:35:07.830620    9948 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> 59562.pem in /etc/ssl/certs
	I0127 12:35:07.830620    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> /etc/ssl/certs/59562.pem
	I0127 12:35:07.846327    9948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:35:07.871475    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem --> /etc/ssl/certs/59562.pem (1708 bytes)
	I0127 12:35:07.917276    9948 start.go:296] duration metric: took 4.6601745s for postStartSetup
	I0127 12:35:07.917514    9948 fix.go:56] duration metric: took 1m25.9908456s for fixHost
	I0127 12:35:07.917588    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:35:09.946554    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:35:09.946554    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:35:09.946642    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:35:12.420548    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:35:12.421287    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:35:12.425141    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:35:12.425955    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.198.106 22 <nil> <nil>}
	I0127 12:35:12.425955    9948 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 12:35:12.561877    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737981312.574710952
	
	I0127 12:35:12.561877    9948 fix.go:216] guest clock: 1737981312.574710952
	I0127 12:35:12.561877    9948 fix.go:229] Guest: 2025-01-27 12:35:12.574710952 +0000 UTC Remote: 2025-01-27 12:35:07.9175148 +0000 UTC m=+91.825743201 (delta=4.657196152s)
	I0127 12:35:12.561877    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:35:14.604407    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:35:14.604407    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:35:14.605231    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:35:17.014500    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:35:17.015341    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:35:17.020755    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:35:17.021344    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.198.106 22 <nil> <nil>}
	I0127 12:35:17.021344    9948 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1737981312
	I0127 12:35:17.172109    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 27 12:35:12 UTC 2025
	
	I0127 12:35:17.172250    9948 fix.go:236] clock set: Mon Jan 27 12:35:12 UTC 2025
	 (err=<nil>)
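	The clock fix above (guest "date +%s.%N", then "sudo date -s @<epoch>") can be checked or repeated from the host. A rough sketch, assuming the profile name from this run and that the "minikube ssh" subcommand is available; it is not how the tool itself does it:

	    # compare host and guest clocks, then push the host epoch into the guest
	    HOST_EPOCH=$(date +%s)
	    GUEST_EPOCH=$(minikube ssh -p multinode-659000 -- date +%s | tr -d '\r')
	    echo "guest-host skew: $((GUEST_EPOCH - HOST_EPOCH))s"
	    minikube ssh -p multinode-659000 -- sudo date -s "@$(date +%s)"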
	I0127 12:35:17.172250    9948 start.go:83] releasing machines lock for "multinode-659000", held for 1m35.2466899s
	I0127 12:35:17.172582    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:35:19.201686    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:35:19.201800    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:35:19.201800    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:35:21.659472    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:35:21.659472    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:35:21.664728    9948 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0127 12:35:21.664805    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:35:21.676633    9948 ssh_runner.go:195] Run: cat /version.json
	I0127 12:35:21.676891    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:35:23.813105    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:35:23.813105    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:35:23.813105    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:35:23.813729    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:35:23.813729    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:35:23.814092    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:35:26.358433    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:35:26.359150    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:35:26.359862    9948 sshutil.go:53] new ssh client: &{IP:172.29.198.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000\id_rsa Username:docker}
	I0127 12:35:26.380896    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:35:26.380896    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:35:26.381944    9948 sshutil.go:53] new ssh client: &{IP:172.29.198.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000\id_rsa Username:docker}
	I0127 12:35:26.456310    9948 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0127 12:35:26.456415    9948 ssh_runner.go:235] Completed: cat /version.json: (4.7796547s)
	I0127 12:35:26.468432    9948 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0127 12:35:26.469086    9948 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8043085s)
	W0127 12:35:26.469086    9948 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0127 12:35:26.470470    9948 ssh_runner.go:195] Run: systemctl --version
	I0127 12:35:26.479670    9948 command_runner.go:130] > systemd 252 (252)
	I0127 12:35:26.479670    9948 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0127 12:35:26.491518    9948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 12:35:26.498399    9948 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0127 12:35:26.498399    9948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 12:35:26.511161    9948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:35:26.536519    9948 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0127 12:35:26.536519    9948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 12:35:26.536519    9948 start.go:495] detecting cgroup driver to use...
	I0127 12:35:26.536519    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:35:26.570419    9948 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	W0127 12:35:26.583646    9948 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0127 12:35:26.583646    9948 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
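	The warning above is raised because the "curl.exe" probe fails inside the Linux guest. A hedged way to re-check reachability by hand, assuming plain curl is available in the guest image (the proxy documentation URL is the one already printed above):

	    # probe the registry from inside the guest, the same way the log attempts it
	    minikube ssh -p multinode-659000 -- curl -sS -m 2 https://registry.k8s.io/
	    # if the host sits behind a proxy, export HTTP_PROXY/HTTPS_PROXY before
	    # "minikube start", per the proxy docs linked in the warning above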
	I0127 12:35:26.588643    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 12:35:26.616375    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 12:35:26.634640    9948 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 12:35:26.644912    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 12:35:26.673793    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:35:26.701860    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 12:35:26.731973    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:35:26.759279    9948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:35:26.787275    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 12:35:26.816442    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 12:35:26.846113    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 12:35:26.875684    9948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:35:26.893061    9948 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:35:26.893259    9948 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:35:26.905737    9948 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 12:35:26.938047    9948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 12:35:26.968644    9948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:35:27.155657    9948 ssh_runner.go:195] Run: sudo systemctl restart containerd
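	The containerd edits above boil down to a handful of idempotent "sed -i" rewrites plus the bridge/netfilter prerequisites. Collected into one sketch, using the same commands that appear in the log (run inside the VM):

	    # switch containerd to the cgroupfs driver and the expected pause image / CNI dir
	    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml
	    sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml
	    # make bridged traffic visible to iptables, enable forwarding, then restart
	    sudo modprobe br_netfilter
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	    sudo systemctl daemon-reload && sudo systemctl restart containerd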
	I0127 12:35:27.182779    9948 start.go:495] detecting cgroup driver to use...
	I0127 12:35:27.193269    9948 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0127 12:35:27.217485    9948 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0127 12:35:27.217485    9948 command_runner.go:130] > [Unit]
	I0127 12:35:27.217536    9948 command_runner.go:130] > Description=Docker Application Container Engine
	I0127 12:35:27.217536    9948 command_runner.go:130] > Documentation=https://docs.docker.com
	I0127 12:35:27.217536    9948 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0127 12:35:27.217536    9948 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0127 12:35:27.217536    9948 command_runner.go:130] > StartLimitBurst=3
	I0127 12:35:27.217585    9948 command_runner.go:130] > StartLimitIntervalSec=60
	I0127 12:35:27.217656    9948 command_runner.go:130] > [Service]
	I0127 12:35:27.217656    9948 command_runner.go:130] > Type=notify
	I0127 12:35:27.217656    9948 command_runner.go:130] > Restart=on-failure
	I0127 12:35:27.217721    9948 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0127 12:35:27.217721    9948 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0127 12:35:27.217721    9948 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0127 12:35:27.217721    9948 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0127 12:35:27.217721    9948 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0127 12:35:27.217721    9948 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0127 12:35:27.217721    9948 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0127 12:35:27.217811    9948 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0127 12:35:27.217811    9948 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0127 12:35:27.217867    9948 command_runner.go:130] > ExecStart=
	I0127 12:35:27.217891    9948 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0127 12:35:27.217891    9948 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0127 12:35:27.217920    9948 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0127 12:35:27.217949    9948 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0127 12:35:27.217949    9948 command_runner.go:130] > LimitNOFILE=infinity
	I0127 12:35:27.217949    9948 command_runner.go:130] > LimitNPROC=infinity
	I0127 12:35:27.217949    9948 command_runner.go:130] > LimitCORE=infinity
	I0127 12:35:27.217949    9948 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0127 12:35:27.217949    9948 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0127 12:35:27.218007    9948 command_runner.go:130] > TasksMax=infinity
	I0127 12:35:27.218007    9948 command_runner.go:130] > TimeoutStartSec=0
	I0127 12:35:27.218007    9948 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0127 12:35:27.218007    9948 command_runner.go:130] > Delegate=yes
	I0127 12:35:27.218007    9948 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0127 12:35:27.218052    9948 command_runner.go:130] > KillMode=process
	I0127 12:35:27.218052    9948 command_runner.go:130] > [Install]
	I0127 12:35:27.218052    9948 command_runner.go:130] > WantedBy=multi-user.target
	I0127 12:35:27.228679    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:35:27.261181    9948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 12:35:27.299697    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:35:27.331802    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 12:35:27.362225    9948 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 12:35:27.425537    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 12:35:27.447502    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:35:27.478887    9948 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0127 12:35:27.489443    9948 ssh_runner.go:195] Run: which cri-dockerd
	I0127 12:35:27.495409    9948 command_runner.go:130] > /usr/bin/cri-dockerd
	I0127 12:35:27.505510    9948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0127 12:35:27.525120    9948 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0127 12:35:27.564210    9948 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0127 12:35:27.750206    9948 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0127 12:35:27.928554    9948 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0127 12:35:27.928850    9948 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0127 12:35:27.970096    9948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:35:28.170454    9948 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 12:35:30.856767    9948 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.686234s)
	I0127 12:35:30.868578    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0127 12:35:30.900902    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 12:35:30.939319    9948 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0127 12:35:31.146599    9948 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0127 12:35:31.332394    9948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:35:31.498147    9948 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0127 12:35:31.536968    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 12:35:31.569205    9948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:35:31.743832    9948 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
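	The block above wires crictl to cri-dockerd and re-enables the units. The same sequence, condensed into a sketch (run inside the VM):

	    # point crictl at the cri-dockerd socket
	    printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' | sudo tee /etc/crictl.yaml
	    # unmask/enable the docker and cri-docker units, then restart them
	    sudo systemctl unmask docker.service && sudo systemctl enable docker.socket
	    sudo systemctl unmask cri-docker.socket && sudo systemctl enable cri-docker.socket
	    sudo systemctl daemon-reload
	    sudo systemctl restart docker cri-docker.socket cri-docker.service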
	I0127 12:35:31.839150    9948 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0127 12:35:31.851132    9948 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0127 12:35:31.862665    9948 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0127 12:35:31.862665    9948 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0127 12:35:31.862665    9948 command_runner.go:130] > Device: 0,22	Inode: 848         Links: 1
	I0127 12:35:31.862665    9948 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0127 12:35:31.862665    9948 command_runner.go:130] > Access: 2025-01-27 12:35:31.778144827 +0000
	I0127 12:35:31.862665    9948 command_runner.go:130] > Modify: 2025-01-27 12:35:31.778144827 +0000
	I0127 12:35:31.862665    9948 command_runner.go:130] > Change: 2025-01-27 12:35:31.781144837 +0000
	I0127 12:35:31.862665    9948 command_runner.go:130] >  Birth: -
	I0127 12:35:31.862665    9948 start.go:563] Will wait 60s for crictl version
	I0127 12:35:31.872553    9948 ssh_runner.go:195] Run: which crictl
	I0127 12:35:31.879243    9948 command_runner.go:130] > /usr/bin/crictl
	I0127 12:35:31.888699    9948 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:35:31.943263    9948 command_runner.go:130] > Version:  0.1.0
	I0127 12:35:31.943263    9948 command_runner.go:130] > RuntimeName:  docker
	I0127 12:35:31.943263    9948 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0127 12:35:31.943320    9948 command_runner.go:130] > RuntimeApiVersion:  v1
	I0127 12:35:31.943320    9948 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
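	The two 60-second waits above (first for the socket path, then for a crictl answer) amount to a simple poll; a sketch of the equivalent check, run inside the VM:

	    # wait up to 60s for the CRI socket, then ask the runtime to identify itself
	    for _ in $(seq 1 60); do
	      [ -S /var/run/cri-dockerd.sock ] && break
	      sleep 1
	    done
	    sudo crictl version   # this run reports RuntimeName docker, RuntimeVersion 27.4.0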
	I0127 12:35:31.956537    9948 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 12:35:31.989370    9948 command_runner.go:130] > 27.4.0
	I0127 12:35:31.998230    9948 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 12:35:32.026782    9948 command_runner.go:130] > 27.4.0
	I0127 12:35:32.030346    9948 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0127 12:35:32.030579    9948 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0127 12:35:32.035536    9948 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0127 12:35:32.035536    9948 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0127 12:35:32.035536    9948 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0127 12:35:32.035536    9948 ip.go:211] Found interface: {Index:17 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:43:05:a6 Flags:up|broadcast|multicast|running}
	I0127 12:35:32.038296    9948 ip.go:214] interface addr: fe80::8ceb:a58b:811a:7c79/64
	I0127 12:35:32.039357    9948 ip.go:214] interface addr: 172.29.192.1/20
	I0127 12:35:32.052435    9948 ssh_runner.go:195] Run: grep 172.29.192.1	host.minikube.internal$ /etc/hosts
	I0127 12:35:32.058836    9948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:35:32.083737    9948 kubeadm.go:883] updating cluster {Name:multinode-659000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-659000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.198.106 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.199.129 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.206.88 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 12:35:32.084263    9948 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0127 12:35:32.094131    9948 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 12:35:32.121250    9948 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.32.1
	I0127 12:35:32.121250    9948 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.32.1
	I0127 12:35:32.121250    9948 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 12:35:32.121250    9948 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.32.1
	I0127 12:35:32.122089    9948 command_runner.go:130] > kindest/kindnetd:v20241108-5c6d2daf
	I0127 12:35:32.122089    9948 command_runner.go:130] > registry.k8s.io/etcd:3.5.16-0
	I0127 12:35:32.122089    9948 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0127 12:35:32.122089    9948 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0127 12:35:32.122089    9948 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:35:32.122089    9948 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0127 12:35:32.122089    9948 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	kindest/kindnetd:v20241108-5c6d2daf
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0127 12:35:32.122089    9948 docker.go:619] Images already preloaded, skipping extraction
	I0127 12:35:32.131547    9948 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 12:35:32.156708    9948 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.32.1
	I0127 12:35:32.156708    9948 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 12:35:32.156708    9948 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.32.1
	I0127 12:35:32.156788    9948 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.32.1
	I0127 12:35:32.156788    9948 command_runner.go:130] > kindest/kindnetd:v20241108-5c6d2daf
	I0127 12:35:32.156823    9948 command_runner.go:130] > registry.k8s.io/etcd:3.5.16-0
	I0127 12:35:32.156823    9948 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0127 12:35:32.156823    9948 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0127 12:35:32.156823    9948 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:35:32.156823    9948 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0127 12:35:32.156888    9948 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	kindest/kindnetd:v20241108-5c6d2daf
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0127 12:35:32.156888    9948 cache_images.go:84] Images are preloaded, skipping loading
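For reference, the preload check above just lists what the VM's Docker daemon already holds and compares it with the expected image set; a minimal sketch of running the same listing by hand (assuming shell access to the node, e.g. via minikube ssh -p multinode-659000):

    # Print every locally available image in the repo:tag form minikube compares against.
    $ docker images --format '{{.Repository}}:{{.Tag}}'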
	I0127 12:35:32.157020    9948 kubeadm.go:934] updating node { 172.29.198.106 8443 v1.32.1 docker true true} ...
	I0127 12:35:32.157251    9948 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-659000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.198.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:multinode-659000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 12:35:32.166793    9948 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0127 12:35:32.233267    9948 command_runner.go:130] > cgroupfs
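The cgroup driver value logged here is read straight from the Docker daemon; the same probe can be run by hand (a sketch, assuming shell access to the node, using the flag shown in the command above):

    # Print only Docker's configured cgroup driver (cgroupfs or systemd).
    $ docker info --format '{{.CgroupDriver}}'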
	I0127 12:35:32.233385    9948 cni.go:84] Creating CNI manager for ""
	I0127 12:35:32.233385    9948 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0127 12:35:32.233471    9948 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 12:35:32.233540    9948 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.29.198.106 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-659000 NodeName:multinode-659000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.29.198.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.29.198.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 12:35:32.233784    9948 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.29.198.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-659000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.29.198.106"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.29.198.106"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0127 12:35:32.245885    9948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 12:35:32.265189    9948 command_runner.go:130] > kubeadm
	I0127 12:35:32.265189    9948 command_runner.go:130] > kubectl
	I0127 12:35:32.265239    9948 command_runner.go:130] > kubelet
	I0127 12:35:32.265239    9948 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:35:32.279660    9948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:35:32.297475    9948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0127 12:35:32.326698    9948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:35:32.354455    9948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2300 bytes)
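At this point the kubelet drop-in, the kubelet unit and the rendered kubeadm config have all been copied onto the node; a sketch for inspecting them in place (paths taken from the scp lines above, assuming shell access to the node):

    # Kubelet systemd drop-in and unit file written by minikube.
    $ sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    $ sudo cat /lib/systemd/system/kubelet.service
    # Freshly rendered kubeadm config, compared with the currently active one
    # (the same diff minikube itself performs later in this log).
    $ sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new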
	I0127 12:35:32.396719    9948 ssh_runner.go:195] Run: grep 172.29.198.106	control-plane.minikube.internal$ /etc/hosts
	I0127 12:35:32.403001    9948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.198.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:35:32.433908    9948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:35:32.607554    9948 ssh_runner.go:195] Run: sudo systemctl start kubelet
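If the kubelet does not come back after this restart, its unit status and recent journal entries are the obvious first checks (a sketch, assuming shell access to the node):

    # Confirm the kubelet unit is active and inspect its most recent log lines.
    $ sudo systemctl status kubelet --no-pager
    $ sudo journalctl -u kubelet -n 50 --no-pager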
	I0127 12:35:32.635931    9948 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000 for IP: 172.29.198.106
	I0127 12:35:32.636017    9948 certs.go:194] generating shared ca certs ...
	I0127 12:35:32.636017    9948 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:35:32.636956    9948 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0127 12:35:32.637363    9948 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0127 12:35:32.637578    9948 certs.go:256] generating profile certs ...
	I0127 12:35:32.638317    9948 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\client.key
	I0127 12:35:32.638565    9948 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.key.8dbcec51
	I0127 12:35:32.638703    9948 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.crt.8dbcec51 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.29.198.106]
	I0127 12:35:32.915804    9948 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.crt.8dbcec51 ...
	I0127 12:35:32.916832    9948 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.crt.8dbcec51: {Name:mk0bc2c577d2d85da05a757ce498d238f017bb3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:35:32.917811    9948 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.key.8dbcec51 ...
	I0127 12:35:32.917811    9948 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.key.8dbcec51: {Name:mka016434d6d6285c6597b5a27e613438132168c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:35:32.918411    9948 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.crt.8dbcec51 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.crt
	I0127 12:35:32.932671    9948 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.key.8dbcec51 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.key
	I0127 12:35:32.934971    9948 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\proxy-client.key
	I0127 12:35:32.934971    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0127 12:35:32.935300    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0127 12:35:32.935469    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0127 12:35:32.935535    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0127 12:35:32.935838    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0127 12:35:32.935992    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0127 12:35:32.936305    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0127 12:35:32.936305    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0127 12:35:32.936844    9948 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem (1338 bytes)
	W0127 12:35:32.937452    9948 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956_empty.pem, impossibly tiny 0 bytes
	I0127 12:35:32.937452    9948 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0127 12:35:32.937871    9948 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0127 12:35:32.938226    9948 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0127 12:35:32.938226    9948 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0127 12:35:32.938226    9948 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem (1708 bytes)
	I0127 12:35:32.938226    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:35:32.938226    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem -> /usr/share/ca-certificates/5956.pem
	I0127 12:35:32.939412    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> /usr/share/ca-certificates/59562.pem
	I0127 12:35:32.940639    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:35:32.992212    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 12:35:33.031894    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:35:33.081403    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:35:33.125225    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 12:35:33.166348    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 12:35:33.211858    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:35:33.253039    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 12:35:33.300278    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:35:33.343433    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem --> /usr/share/ca-certificates/5956.pem (1338 bytes)
	I0127 12:35:33.390186    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem --> /usr/share/ca-certificates/59562.pem (1708 bytes)
	I0127 12:35:33.432257    9948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:35:33.473824    9948 ssh_runner.go:195] Run: openssl version
	I0127 12:35:33.481989    9948 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0127 12:35:33.491533    9948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5956.pem && ln -fs /usr/share/ca-certificates/5956.pem /etc/ssl/certs/5956.pem"
	I0127 12:35:33.517440    9948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5956.pem
	I0127 12:35:33.524004    9948 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 27 10:52 /usr/share/ca-certificates/5956.pem
	I0127 12:35:33.525172    9948 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:52 /usr/share/ca-certificates/5956.pem
	I0127 12:35:33.538098    9948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5956.pem
	I0127 12:35:33.545660    9948 command_runner.go:130] > 51391683
	I0127 12:35:33.556141    9948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5956.pem /etc/ssl/certs/51391683.0"
	I0127 12:35:33.584743    9948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59562.pem && ln -fs /usr/share/ca-certificates/59562.pem /etc/ssl/certs/59562.pem"
	I0127 12:35:33.610589    9948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59562.pem
	I0127 12:35:33.618085    9948 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 27 10:52 /usr/share/ca-certificates/59562.pem
	I0127 12:35:33.618085    9948 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:52 /usr/share/ca-certificates/59562.pem
	I0127 12:35:33.627711    9948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59562.pem
	I0127 12:35:33.635525    9948 command_runner.go:130] > 3ec20f2e
	I0127 12:35:33.645737    9948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59562.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:35:33.671803    9948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:35:33.699427    9948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:35:33.705546    9948 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 27 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:35:33.705546    9948 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:35:33.715843    9948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:35:33.724183    9948 command_runner.go:130] > b5213941
	I0127 12:35:33.734350    9948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
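The hex file names used for the /etc/ssl/certs symlinks above are OpenSSL subject hashes of the corresponding certificates; a sketch of how the link for the minikube CA is derived (same commands and hash value as shown in the log):

    # Compute the subject hash the symlink is named after ...
    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
    # ... then point /etc/ssl/certs/<hash>.0 at the certificate so OpenSSL's lookup finds it.
    $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0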
	I0127 12:35:33.765332    9948 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:35:33.772366    9948 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:35:33.772366    9948 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0127 12:35:33.772366    9948 command_runner.go:130] > Device: 8,1	Inode: 3148641     Links: 1
	I0127 12:35:33.772466    9948 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0127 12:35:33.772466    9948 command_runner.go:130] > Access: 2025-01-27 12:11:47.940042269 +0000
	I0127 12:35:33.772466    9948 command_runner.go:130] > Modify: 2025-01-27 12:11:47.940042269 +0000
	I0127 12:35:33.772466    9948 command_runner.go:130] > Change: 2025-01-27 12:11:47.940042269 +0000
	I0127 12:35:33.772524    9948 command_runner.go:130] >  Birth: 2025-01-27 12:11:47.940042269 +0000
	I0127 12:35:33.780865    9948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 12:35:33.789536    9948 command_runner.go:130] > Certificate will not expire
	I0127 12:35:33.799657    9948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 12:35:33.807439    9948 command_runner.go:130] > Certificate will not expire
	I0127 12:35:33.817568    9948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 12:35:33.826161    9948 command_runner.go:130] > Certificate will not expire
	I0127 12:35:33.836213    9948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 12:35:33.847913    9948 command_runner.go:130] > Certificate will not expire
	I0127 12:35:33.857820    9948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 12:35:33.866078    9948 command_runner.go:130] > Certificate will not expire
	I0127 12:35:33.875461    9948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 12:35:33.882738    9948 command_runner.go:130] > Certificate will not expire
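Each expiry probe above relies on openssl's -checkend flag, which exits non-zero if the certificate expires within the given number of seconds; a minimal sketch using one of the paths from the log (86400 seconds = 24 hours):

    # Exit status 0 means the certificate is still valid 24 hours from now.
    $ openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
        && echo "Certificate will not expire"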
	I0127 12:35:33.882738    9948 kubeadm.go:392] StartCluster: {Name:multinode-659000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-659000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.198.106 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.199.129 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.206.88 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:35:33.891709    9948 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0127 12:35:33.925944    9948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 12:35:33.944341    9948 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0127 12:35:33.944341    9948 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0127 12:35:33.944341    9948 command_runner.go:130] > /var/lib/minikube/etcd:
	I0127 12:35:33.944341    9948 command_runner.go:130] > member
	I0127 12:35:33.944341    9948 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 12:35:33.944341    9948 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 12:35:33.955335    9948 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 12:35:33.974424    9948 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 12:35:33.975338    9948 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-659000" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 12:35:33.976433    9948 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-659000" cluster setting kubeconfig missing "multinode-659000" context setting]
	I0127 12:35:33.977390    9948 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:35:33.995095    9948 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 12:35:33.995377    9948 kapi.go:59] client config for multinode-659000: &rest.Config{Host:"https://172.29.198.106:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-659000/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-659000/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x301e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 12:35:33.996538    9948 cert_rotation.go:140] Starting client certificate rotation controller
	I0127 12:35:34.007906    9948 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 12:35:34.025167    9948 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0127 12:35:34.025222    9948 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0127 12:35:34.025222    9948 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0127 12:35:34.025222    9948 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta4
	I0127 12:35:34.025222    9948 command_runner.go:130] >  kind: InitConfiguration
	I0127 12:35:34.025222    9948 command_runner.go:130] >  localAPIEndpoint:
	I0127 12:35:34.025301    9948 command_runner.go:130] > -  advertiseAddress: 172.29.204.17
	I0127 12:35:34.025301    9948 command_runner.go:130] > +  advertiseAddress: 172.29.198.106
	I0127 12:35:34.025301    9948 command_runner.go:130] >    bindPort: 8443
	I0127 12:35:34.025301    9948 command_runner.go:130] >  bootstrapTokens:
	I0127 12:35:34.025301    9948 command_runner.go:130] >    - groups:
	I0127 12:35:34.025301    9948 command_runner.go:130] > @@ -15,13 +15,13 @@
	I0127 12:35:34.025301    9948 command_runner.go:130] >    name: "multinode-659000"
	I0127 12:35:34.025301    9948 command_runner.go:130] >    kubeletExtraArgs:
	I0127 12:35:34.025301    9948 command_runner.go:130] >      - name: "node-ip"
	I0127 12:35:34.025301    9948 command_runner.go:130] > -      value: "172.29.204.17"
	I0127 12:35:34.025399    9948 command_runner.go:130] > +      value: "172.29.198.106"
	I0127 12:35:34.025399    9948 command_runner.go:130] >    taints: []
	I0127 12:35:34.025399    9948 command_runner.go:130] >  ---
	I0127 12:35:34.025441    9948 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta4
	I0127 12:35:34.025441    9948 command_runner.go:130] >  kind: ClusterConfiguration
	I0127 12:35:34.025441    9948 command_runner.go:130] >  apiServer:
	I0127 12:35:34.025441    9948 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.29.204.17"]
	I0127 12:35:34.025441    9948 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.29.198.106"]
	I0127 12:35:34.025441    9948 command_runner.go:130] >    extraArgs:
	I0127 12:35:34.025495    9948 command_runner.go:130] >      - name: "enable-admission-plugins"
	I0127 12:35:34.025495    9948 command_runner.go:130] >        value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0127 12:35:34.025533    9948 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.29.204.17
	+  advertiseAddress: 172.29.198.106
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -15,13 +15,13 @@
	   name: "multinode-659000"
	   kubeletExtraArgs:
	     - name: "node-ip"
	-      value: "172.29.204.17"
	+      value: "172.29.198.106"
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.29.204.17"]
	+  certSANs: ["127.0.0.1", "localhost", "172.29.198.106"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	       value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	
	-- /stdout --
	I0127 12:35:34.025596    9948 kubeadm.go:1160] stopping kube-system containers ...
	I0127 12:35:34.034084    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0127 12:35:34.065879    9948 command_runner.go:130] > f818dd15d8b0
	I0127 12:35:34.066789    9948 command_runner.go:130] > 134620caeeb9
	I0127 12:35:34.066789    9948 command_runner.go:130] > bc9ef8ee86ec
	I0127 12:35:34.066789    9948 command_runner.go:130] > 4a53e133a1cd
	I0127 12:35:34.066789    9948 command_runner.go:130] > d758000dda95
	I0127 12:35:34.066789    9948 command_runner.go:130] > bbec7ccef7da
	I0127 12:35:34.066789    9948 command_runner.go:130] > f2d0bd65fe50
	I0127 12:35:34.066851    9948 command_runner.go:130] > 319cddeebceb
	I0127 12:35:34.066851    9948 command_runner.go:130] > a16e06a03860
	I0127 12:35:34.066851    9948 command_runner.go:130] > e07a66f8f619
	I0127 12:35:34.066881    9948 command_runner.go:130] > 5f274e5a8851
	I0127 12:35:34.066881    9948 command_runner.go:130] > f91e9c2d3ba6
	I0127 12:35:34.066881    9948 command_runner.go:130] > 1b522c4c9f4c
	I0127 12:35:34.066881    9948 command_runner.go:130] > 51ee4649b24a
	I0127 12:35:34.066881    9948 command_runner.go:130] > 1bd5bf99bede
	I0127 12:35:34.066881    9948 command_runner.go:130] > 5423fc511329
	I0127 12:35:34.066881    9948 docker.go:483] Stopping containers: [f818dd15d8b0 134620caeeb9 bc9ef8ee86ec 4a53e133a1cd d758000dda95 bbec7ccef7da f2d0bd65fe50 319cddeebceb a16e06a03860 e07a66f8f619 5f274e5a8851 f91e9c2d3ba6 1b522c4c9f4c 51ee4649b24a 1bd5bf99bede 5423fc511329]
	I0127 12:35:34.077725    9948 ssh_runner.go:195] Run: docker stop f818dd15d8b0 134620caeeb9 bc9ef8ee86ec 4a53e133a1cd d758000dda95 bbec7ccef7da f2d0bd65fe50 319cddeebceb a16e06a03860 e07a66f8f619 5f274e5a8851 f91e9c2d3ba6 1b522c4c9f4c 51ee4649b24a 1bd5bf99bede 5423fc511329
	I0127 12:35:34.104726    9948 command_runner.go:130] > f818dd15d8b0
	I0127 12:35:34.104726    9948 command_runner.go:130] > 134620caeeb9
	I0127 12:35:34.104726    9948 command_runner.go:130] > bc9ef8ee86ec
	I0127 12:35:34.104726    9948 command_runner.go:130] > 4a53e133a1cd
	I0127 12:35:34.104726    9948 command_runner.go:130] > d758000dda95
	I0127 12:35:34.104726    9948 command_runner.go:130] > bbec7ccef7da
	I0127 12:35:34.104726    9948 command_runner.go:130] > f2d0bd65fe50
	I0127 12:35:34.104726    9948 command_runner.go:130] > 319cddeebceb
	I0127 12:35:34.104726    9948 command_runner.go:130] > a16e06a03860
	I0127 12:35:34.105649    9948 command_runner.go:130] > e07a66f8f619
	I0127 12:35:34.105649    9948 command_runner.go:130] > 5f274e5a8851
	I0127 12:35:34.105649    9948 command_runner.go:130] > f91e9c2d3ba6
	I0127 12:35:34.105649    9948 command_runner.go:130] > 1b522c4c9f4c
	I0127 12:35:34.105649    9948 command_runner.go:130] > 51ee4649b24a
	I0127 12:35:34.105649    9948 command_runner.go:130] > 1bd5bf99bede
	I0127 12:35:34.105726    9948 command_runner.go:130] > 5423fc511329
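The container cleanup above uses a name filter matching the k8s_..._(kube-system)_ naming pattern of kube-system pod containers; a sketch of reproducing it by hand (assuming shell access to the node, same filter as in the log):

    # List every container, running or stopped, that belongs to a kube-system pod ...
    $ docker ps -a --filter=name='k8s_.*_(kube-system)_' --format='{{.ID}}'
    # ... and stop them all in one go.
    $ docker stop $(docker ps -a --filter=name='k8s_.*_(kube-system)_' --format='{{.ID}}')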
	I0127 12:35:34.119381    9948 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 12:35:34.168359    9948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:35:34.187564    9948 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0127 12:35:34.187786    9948 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0127 12:35:34.187933    9948 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0127 12:35:34.187933    9948 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:35:34.188220    9948 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:35:34.188220    9948 kubeadm.go:157] found existing configuration files:
	
	I0127 12:35:34.199979    9948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:35:34.216712    9948 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:35:34.218042    9948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:35:34.229551    9948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:35:34.256966    9948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:35:34.272571    9948 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:35:34.272865    9948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:35:34.284645    9948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:35:34.320902    9948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:35:34.338787    9948 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:35:34.339721    9948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:35:34.351390    9948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:35:34.382915    9948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:35:34.409553    9948 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:35:34.410825    9948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:35:34.421087    9948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0127 12:35:34.449066    9948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:35:34.466099    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:35:34.777331    9948 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:35:34.777331    9948 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0127 12:35:34.777331    9948 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0127 12:35:34.777331    9948 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 12:35:34.777460    9948 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0127 12:35:34.777460    9948 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0127 12:35:34.777460    9948 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0127 12:35:34.777460    9948 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0127 12:35:34.777460    9948 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0127 12:35:34.777571    9948 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 12:35:34.777571    9948 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 12:35:34.777571    9948 command_runner.go:130] > [certs] Using the existing "sa" key
	I0127 12:35:34.777703    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:35:35.793913    9948 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:35:35.793913    9948 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:35:35.793913    9948 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 12:35:35.793913    9948 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:35:35.793913    9948 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:35:35.793913    9948 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:35:35.793913    9948 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.0161988s)
	I0127 12:35:35.793913    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:35:36.085887    9948 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:35:36.085887    9948 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:35:36.085887    9948 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0127 12:35:36.085887    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:35:36.179991    9948 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:35:36.180081    9948 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:35:36.180081    9948 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:35:36.180081    9948 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:35:36.180150    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:35:36.259906    9948 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
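Rather than a full kubeadm init, the restart path runs individual init phases against the rendered config; the same phases can be invoked by hand with the paths shown above (a sketch, assuming shell access to the node):

    # Re-run selected kubeadm init phases exactly as the log above does,
    # using the kubeadm binary minikube staged for v1.32.1.
    $ sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" \
        kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
    $ sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" \
        kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml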
	I0127 12:35:36.259906    9948 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:35:36.268905    9948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:36.771952    9948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:37.270661    9948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:37.769361    9948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:38.271519    9948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:38.299541    9948 command_runner.go:130] > 2017
	I0127 12:35:38.299541    9948 api_server.go:72] duration metric: took 2.0396144s to wait for apiserver process to appear ...
	I0127 12:35:38.299541    9948 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:35:38.299541    9948 api_server.go:253] Checking apiserver healthz at https://172.29.198.106:8443/healthz ...
	I0127 12:35:41.371814    9948 api_server.go:279] https://172.29.198.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 12:35:41.371814    9948 api_server.go:103] status: https://172.29.198.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 12:35:41.371947    9948 api_server.go:253] Checking apiserver healthz at https://172.29.198.106:8443/healthz ...
	I0127 12:35:41.403172    9948 api_server.go:279] https://172.29.198.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 12:35:41.403908    9948 api_server.go:103] status: https://172.29.198.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 12:35:41.800314    9948 api_server.go:253] Checking apiserver healthz at https://172.29.198.106:8443/healthz ...
	I0127 12:35:41.810254    9948 api_server.go:279] https://172.29.198.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:35:41.810303    9948 api_server.go:103] status: https://172.29.198.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:35:42.300026    9948 api_server.go:253] Checking apiserver healthz at https://172.29.198.106:8443/healthz ...
	I0127 12:35:42.307320    9948 api_server.go:279] https://172.29.198.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:35:42.307320    9948 api_server.go:103] status: https://172.29.198.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:35:42.801235    9948 api_server.go:253] Checking apiserver healthz at https://172.29.198.106:8443/healthz ...
	I0127 12:35:42.811831    9948 api_server.go:279] https://172.29.198.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:35:42.811831    9948 api_server.go:103] status: https://172.29.198.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:35:43.300245    9948 api_server.go:253] Checking apiserver healthz at https://172.29.198.106:8443/healthz ...
	I0127 12:35:43.308109    9948 api_server.go:279] https://172.29.198.106:8443/healthz returned 200:
	ok
	I0127 12:35:43.309250    9948 round_trippers.go:463] GET https://172.29.198.106:8443/version
	I0127 12:35:43.309250    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:43.309250    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:43.309316    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:43.323759    9948 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0127 12:35:43.323857    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:43.323857    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:43.323857    9948 round_trippers.go:580]     Content-Length: 263
	I0127 12:35:43.323857    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:43 GMT
	I0127 12:35:43.323857    9948 round_trippers.go:580]     Audit-Id: e6b2733b-3baf-477a-b2db-40e5fbda5916
	I0127 12:35:43.323857    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:43.323857    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:43.323857    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:43.324050    9948 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "32",
	  "gitVersion": "v1.32.1",
	  "gitCommit": "e9c9be4007d1664e68796af02b8978640d2c1b26",
	  "gitTreeState": "clean",
	  "buildDate": "2025-01-15T14:31:55Z",
	  "goVersion": "go1.23.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0127 12:35:43.324193    9948 api_server.go:141] control plane version: v1.32.1
	I0127 12:35:43.324250    9948 api_server.go:131] duration metric: took 5.0246562s to wait for apiserver health ...
	I0127 12:35:43.324250    9948 cni.go:84] Creating CNI manager for ""
	I0127 12:35:43.324310    9948 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0127 12:35:43.328300    9948 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0127 12:35:43.343783    9948 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0127 12:35:43.352289    9948 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0127 12:35:43.352289    9948 command_runner.go:130] >   Size: 3103192   	Blocks: 6064       IO Block: 4096   regular file
	I0127 12:35:43.352289    9948 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0127 12:35:43.352289    9948 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0127 12:35:43.352289    9948 command_runner.go:130] > Access: 2025-01-27 12:34:12.535327600 +0000
	I0127 12:35:43.352289    9948 command_runner.go:130] > Modify: 2025-01-14 09:03:58.000000000 +0000
	I0127 12:35:43.352289    9948 command_runner.go:130] > Change: 2025-01-27 12:34:04.059000000 +0000
	I0127 12:35:43.352289    9948 command_runner.go:130] >  Birth: -
	I0127 12:35:43.352528    9948 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0127 12:35:43.352600    9948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0127 12:35:43.447309    9948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0127 12:35:44.622432    9948 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0127 12:35:44.622527    9948 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0127 12:35:44.622527    9948 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0127 12:35:44.622527    9948 command_runner.go:130] > daemonset.apps/kindnet configured
	I0127 12:35:44.622527    9948 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.1752062s)
	I0127 12:35:44.622655    9948 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:35:44.622882    9948 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0127 12:35:44.622882    9948 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0127 12:35:44.623115    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods
	I0127 12:35:44.623115    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:44.623162    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:44.623162    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:44.686897    9948 round_trippers.go:574] Response Status: 200 OK in 63 milliseconds
	I0127 12:35:44.686897    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:44.686897    9948 round_trippers.go:580]     Audit-Id: 0888cc0e-7012-4657-adcf-f78ed48588b5
	I0127 12:35:44.686897    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:44.686897    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:44.686897    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:44.686897    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:44.686897    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:44 GMT
	I0127 12:35:44.693884    9948 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1891"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 91550 chars]
	I0127 12:35:44.701115    9948 system_pods.go:59] 12 kube-system pods found
	I0127 12:35:44.701179    9948 system_pods.go:61] "coredns-668d6bf9bc-2qw6w" [8f0367fc-d842-4cc3-8e71-30869a548243] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:35:44.701179    9948 system_pods.go:61] "etcd-multinode-659000" [4c33fa42-51a7-4a7a-a497-cce80b8773d6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 12:35:44.701179    9948 system_pods.go:61] "kindnet-kpfjt" [b00e6ead-b072-40b5-9c87-7697316d8107] Running
	I0127 12:35:44.701179    9948 system_pods.go:61] "kindnet-n7vjl" [23617db6-b970-4ead-845b-69776d50ffef] Running
	I0127 12:35:44.701308    9948 system_pods.go:61] "kindnet-z2hqq" [9b617a9c-e2b8-45fd-bee2-45cb03d4cd42] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0127 12:35:44.701308    9948 system_pods.go:61] "kube-apiserver-multinode-659000" [8fbee94f-fd8f-4431-bd9f-b75d49cb19d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 12:35:44.701308    9948 system_pods.go:61] "kube-controller-manager-multinode-659000" [8be02f36-161c-44f3-b526-56db3b8a007a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 12:35:44.701308    9948 system_pods.go:61] "kube-proxy-pjhc8" [ddb6698c-b83d-4a49-9672-c894e87cbb66] Running
	I0127 12:35:44.701308    9948 system_pods.go:61] "kube-proxy-s46mv" [ae3b8daf-d674-4cfe-8652-cb5ff6ba8615] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 12:35:44.701308    9948 system_pods.go:61] "kube-proxy-sk5js" [ba679e1d-713c-4bd4-b267-2b887c1ac4df] Running
	I0127 12:35:44.701308    9948 system_pods.go:61] "kube-scheduler-multinode-659000" [52b91964-a331-4925-9e07-c8df32b4176d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 12:35:44.701308    9948 system_pods.go:61] "storage-provisioner" [bcfd7913-1bc0-4c24-882f-2be92ec9b046] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 12:35:44.701308    9948 system_pods.go:74] duration metric: took 78.5775ms to wait for pod list to return data ...
	I0127 12:35:44.701308    9948 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:35:44.701308    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes
	I0127 12:35:44.701308    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:44.701308    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:44.701308    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:44.779677    9948 round_trippers.go:574] Response Status: 200 OK in 78 milliseconds
	I0127 12:35:44.779818    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:44.779884    9948 round_trippers.go:580]     Audit-Id: 9eed8ae3-6e78-4019-8c87-04d758d98dbb
	I0127 12:35:44.779884    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:44.779884    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:44.779884    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:44.779884    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:44.779884    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:44 GMT
	I0127 12:35:44.780081    9948 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1892"},"items":[{"metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1813","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15631 chars]
	I0127 12:35:44.781830    9948 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:35:44.781830    9948 node_conditions.go:123] node cpu capacity is 2
	I0127 12:35:44.781830    9948 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:35:44.781830    9948 node_conditions.go:123] node cpu capacity is 2
	I0127 12:35:44.781830    9948 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:35:44.781830    9948 node_conditions.go:123] node cpu capacity is 2
	I0127 12:35:44.781830    9948 node_conditions.go:105] duration metric: took 80.5203ms to run NodePressure ...
	I0127 12:35:44.781830    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:35:45.349385    9948 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0127 12:35:45.349385    9948 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0127 12:35:45.349385    9948 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 12:35:45.349385    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0127 12:35:45.349385    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:45.349385    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:45.349385    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:45.361302    9948 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0127 12:35:45.361383    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:45.361406    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:45.361406    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:45.361406    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:45.361406    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:45.361406    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:45 GMT
	I0127 12:35:45.361406    9948 round_trippers.go:580]     Audit-Id: 889af32d-71d8-434c-a98e-d987fbb0f3ff
	I0127 12:35:45.361487    9948 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1916"},"items":[{"metadata":{"name":"etcd-multinode-659000","namespace":"kube-system","uid":"4c33fa42-51a7-4a7a-a497-cce80b8773d6","resourceVersion":"1864","creationTimestamp":"2025-01-27T12:35:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.29.198.106:2379","kubernetes.io/config.hash":"575cefa3aa8017dce576fa244e719a4e","kubernetes.io/config.mirror":"575cefa3aa8017dce576fa244e719a4e","kubernetes.io/config.seen":"2025-01-27T12:35:36.285837685Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:35:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 31716 chars]
	I0127 12:35:45.363020    9948 kubeadm.go:739] kubelet initialised
	I0127 12:35:45.363020    9948 kubeadm.go:740] duration metric: took 13.6343ms waiting for restarted kubelet to initialise ...
	I0127 12:35:45.363544    9948 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:35:45.363706    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods
	I0127 12:35:45.363727    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:45.363768    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:45.363768    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:45.368502    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:45.368502    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:45.368502    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:45 GMT
	I0127 12:35:45.368502    9948 round_trippers.go:580]     Audit-Id: dadc4cc3-64a9-4610-9a1f-ea232d5aa1c0
	I0127 12:35:45.368502    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:45.368502    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:45.368502    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:45.368502    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:45.370527    9948 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1916"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90957 chars]
	I0127 12:35:45.373525    9948 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:45.373525    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:35:45.373525    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:45.373525    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:45.373525    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:45.376514    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:35:45.376514    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:45.376514    9948 round_trippers.go:580]     Audit-Id: b0f7f9e2-cb38-4be6-b4e7-6a0f4fbb5651
	I0127 12:35:45.376514    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:45.376514    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:45.376514    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:45.376514    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:45.376514    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:45 GMT
	I0127 12:35:45.376514    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:35:45.377530    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:45.377530    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:45.377530    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:45.377530    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:45.381523    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:35:45.381523    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:45.381523    9948 round_trippers.go:580]     Audit-Id: fbf77371-a0a9-4c29-a553-0ef80275ac50
	I0127 12:35:45.381523    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:45.381523    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:45.381523    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:45.381523    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:45.381523    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:45 GMT
	I0127 12:35:45.381523    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:45.382516    9948 pod_ready.go:98] node "multinode-659000" hosting pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000" has status "Ready":"False"
	I0127 12:35:45.382516    9948 pod_ready.go:82] duration metric: took 8.991ms for pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace to be "Ready" ...
	E0127 12:35:45.382516    9948 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-659000" hosting pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000" has status "Ready":"False"
	I0127 12:35:45.382516    9948 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:45.382516    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-659000
	I0127 12:35:45.382516    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:45.382516    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:45.382516    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:45.385530    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:35:45.385530    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:45.385530    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:45.385530    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:45.385530    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:45 GMT
	I0127 12:35:45.385530    9948 round_trippers.go:580]     Audit-Id: 924fc52f-715d-406c-8d55-d13ff08e9907
	I0127 12:35:45.385530    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:45.385530    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:45.385530    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-659000","namespace":"kube-system","uid":"4c33fa42-51a7-4a7a-a497-cce80b8773d6","resourceVersion":"1864","creationTimestamp":"2025-01-27T12:35:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.29.198.106:2379","kubernetes.io/config.hash":"575cefa3aa8017dce576fa244e719a4e","kubernetes.io/config.mirror":"575cefa3aa8017dce576fa244e719a4e","kubernetes.io/config.seen":"2025-01-27T12:35:36.285837685Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:35:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6841 chars]
	I0127 12:35:45.385530    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:45.386526    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:45.386526    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:45.386526    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:45.388532    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:35:45.388532    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:45.388532    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:45.388532    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:45.388532    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:45.388532    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:45 GMT
	I0127 12:35:45.388532    9948 round_trippers.go:580]     Audit-Id: c6bdf9b9-e6c2-4dc2-b522-9096f82ded4f
	I0127 12:35:45.388532    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:45.388532    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:45.388532    9948 pod_ready.go:98] node "multinode-659000" hosting pod "etcd-multinode-659000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000" has status "Ready":"False"
	I0127 12:35:45.388532    9948 pod_ready.go:82] duration metric: took 6.0155ms for pod "etcd-multinode-659000" in "kube-system" namespace to be "Ready" ...
	E0127 12:35:45.388532    9948 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-659000" hosting pod "etcd-multinode-659000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000" has status "Ready":"False"
	I0127 12:35:45.388532    9948 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:45.388532    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-659000
	I0127 12:35:45.388532    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:45.388532    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:45.388532    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:45.392518    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:35:45.393105    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:45.393105    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:45.393105    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:45.393105    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:45 GMT
	I0127 12:35:45.393105    9948 round_trippers.go:580]     Audit-Id: 64fe31a7-8b9b-4130-8425-4e54162300e5
	I0127 12:35:45.393186    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:45.393186    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:45.393314    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-659000","namespace":"kube-system","uid":"8fbee94f-fd8f-4431-bd9f-b75d49cb19d4","resourceVersion":"1865","creationTimestamp":"2025-01-27T12:35:42Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.29.198.106:8443","kubernetes.io/config.hash":"b9fbd177058ba298cde2a92c4ef5c601","kubernetes.io/config.mirror":"b9fbd177058ba298cde2a92c4ef5c601","kubernetes.io/config.seen":"2025-01-27T12:35:36.265565317Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:35:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8293 chars]
	I0127 12:35:45.394150    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:45.394205    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:45.394205    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:45.394205    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:45.396899    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:35:45.396899    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:45.396899    9948 round_trippers.go:580]     Audit-Id: 91e9fd33-b24b-4878-9a12-02ed1f23a99f
	I0127 12:35:45.396899    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:45.396899    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:45.396899    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:45.396899    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:45.396899    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:45 GMT
	I0127 12:35:45.396899    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:45.397791    9948 pod_ready.go:98] node "multinode-659000" hosting pod "kube-apiserver-multinode-659000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000" has status "Ready":"False"
	I0127 12:35:45.397859    9948 pod_ready.go:82] duration metric: took 9.3272ms for pod "kube-apiserver-multinode-659000" in "kube-system" namespace to be "Ready" ...
	E0127 12:35:45.397859    9948 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-659000" hosting pod "kube-apiserver-multinode-659000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000" has status "Ready":"False"
	I0127 12:35:45.397916    9948 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:45.398061    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-659000
	I0127 12:35:45.398076    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:45.398076    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:45.398076    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:45.405836    9948 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 12:35:45.406308    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:45.406308    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:45.406308    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:45.406308    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:45 GMT
	I0127 12:35:45.406373    9948 round_trippers.go:580]     Audit-Id: 1db62ae8-7e70-4a97-8c92-de9d8c0020d8
	I0127 12:35:45.406373    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:45.406373    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:45.406404    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-659000","namespace":"kube-system","uid":"8be02f36-161c-44f3-b526-56db3b8a007a","resourceVersion":"1860","creationTimestamp":"2025-01-27T12:11:59Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4a14d0700eafa36dd3913955f2c0f839","kubernetes.io/config.mirror":"4a14d0700eafa36dd3913955f2c0f839","kubernetes.io/config.seen":"2025-01-27T12:11:59.106472767Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:11:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0127 12:35:45.407449    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:45.407514    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:45.407514    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:45.407514    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:45.410044    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:35:45.410113    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:45.410130    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:45 GMT
	I0127 12:35:45.410130    9948 round_trippers.go:580]     Audit-Id: cfe352dd-face-4d67-b055-afb228e5515b
	I0127 12:35:45.410130    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:45.410130    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:45.410130    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:45.410130    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:45.410396    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:45.410396    9948 pod_ready.go:98] node "multinode-659000" hosting pod "kube-controller-manager-multinode-659000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000" has status "Ready":"False"
	I0127 12:35:45.410396    9948 pod_ready.go:82] duration metric: took 12.4288ms for pod "kube-controller-manager-multinode-659000" in "kube-system" namespace to be "Ready" ...
	E0127 12:35:45.410396    9948 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-659000" hosting pod "kube-controller-manager-multinode-659000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000" has status "Ready":"False"
	I0127 12:35:45.411150    9948 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-pjhc8" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:45.550275    9948 request.go:632] Waited for 139.1229ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pjhc8
	I0127 12:35:45.550653    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pjhc8
	I0127 12:35:45.550689    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:45.550689    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:45.550734    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:45.554184    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:35:45.554276    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:45.554276    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:45.554276    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:45.554276    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:45 GMT
	I0127 12:35:45.554276    9948 round_trippers.go:580]     Audit-Id: 6abb2687-6d94-44fb-9ad9-c29c2e602707
	I0127 12:35:45.554276    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:45.554276    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:45.554606    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pjhc8","generateName":"kube-proxy-","namespace":"kube-system","uid":"ddb6698c-b83d-4a49-9672-c894e87cbb66","resourceVersion":"626","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d88eb776-b464-4f2b-8140-44249610a7fa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d88eb776-b464-4f2b-8140-44249610a7fa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6207 chars]
	I0127 12:35:45.750312    9948 request.go:632] Waited for 195.5144ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:35:45.750312    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:35:45.750312    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:45.750312    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:45.750312    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:45.750312    9948 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0127 12:35:45.750312    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:45.750312    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:45.750312    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:45 GMT
	I0127 12:35:45.750312    9948 round_trippers.go:580]     Audit-Id: 344a35cd-63ed-4749-9075-1e32d1280e98
	I0127 12:35:45.750312    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:45.750312    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:45.750312    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:45.750312    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"1482","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3828 chars]
	I0127 12:35:45.750312    9948 pod_ready.go:93] pod "kube-proxy-pjhc8" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:45.750312    9948 pod_ready.go:82] duration metric: took 339.1583ms for pod "kube-proxy-pjhc8" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:45.750312    9948 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-s46mv" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:45.954768    9948 request.go:632] Waited for 204.4542ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s46mv
	I0127 12:35:45.955070    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s46mv
	I0127 12:35:45.955070    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:45.955070    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:45.955070    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:45.958848    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:35:45.958848    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:45.958848    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:45.958848    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:45.958848    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:45.958848    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:45 GMT
	I0127 12:35:45.958991    9948 round_trippers.go:580]     Audit-Id: 6620372a-f334-4139-8de8-80e58730afab
	I0127 12:35:45.958991    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:45.959373    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s46mv","generateName":"kube-proxy-","namespace":"kube-system","uid":"ae3b8daf-d674-4cfe-8652-cb5ff6ba8615","resourceVersion":"1898","creationTimestamp":"2025-01-27T12:12:03Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d88eb776-b464-4f2b-8140-44249610a7fa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d88eb776-b464-4f2b-8140-44249610a7fa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6405 chars]
	I0127 12:35:46.150608    9948 request.go:632] Waited for 190.3966ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:46.150608    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:46.150608    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:46.150608    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:46.150608    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:46.155647    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:35:46.155713    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:46.155713    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:46.155713    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:46 GMT
	I0127 12:35:46.155713    9948 round_trippers.go:580]     Audit-Id: e1ea1884-aebc-4345-acd4-b6e046d869be
	I0127 12:35:46.155771    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:46.155771    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:46.155771    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:46.156114    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:46.157022    9948 pod_ready.go:98] node "multinode-659000" hosting pod "kube-proxy-s46mv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000" has status "Ready":"False"
	I0127 12:35:46.157022    9948 pod_ready.go:82] duration metric: took 406.7054ms for pod "kube-proxy-s46mv" in "kube-system" namespace to be "Ready" ...
	E0127 12:35:46.157128    9948 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-659000" hosting pod "kube-proxy-s46mv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000" has status "Ready":"False"
	I0127 12:35:46.157128    9948 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-sk5js" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:46.349932    9948 request.go:632] Waited for 192.5365ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sk5js
	I0127 12:35:46.349932    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sk5js
	I0127 12:35:46.349932    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:46.349932    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:46.349932    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:46.354742    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:46.354742    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:46.354742    9948 round_trippers.go:580]     Audit-Id: f16df18b-c2d8-4639-9942-d4bfdad6529b
	I0127 12:35:46.354742    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:46.354742    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:46.354742    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:46.354742    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:46.354742    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:46 GMT
	I0127 12:35:46.354742    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sk5js","generateName":"kube-proxy-","namespace":"kube-system","uid":"ba679e1d-713c-4bd4-b267-2b887c1ac4df","resourceVersion":"1793","creationTimestamp":"2025-01-27T12:19:54Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d88eb776-b464-4f2b-8140-44249610a7fa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:19:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d88eb776-b464-4f2b-8140-44249610a7fa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6428 chars]
	I0127 12:35:46.549498    9948 request.go:632] Waited for 193.3007ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/nodes/multinode-659000-m03
	I0127 12:35:46.549978    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000-m03
	I0127 12:35:46.550092    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:46.550143    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:46.550173    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:46.554344    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:46.554471    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:46.554471    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:46.554471    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:46 GMT
	I0127 12:35:46.554471    9948 round_trippers.go:580]     Audit-Id: e35ee77c-59b2-4228-b7f0-5050d7835f01
	I0127 12:35:46.554471    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:46.554471    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:46.554471    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:46.554696    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m03","uid":"0516f5fa-16ad-40aa-9616-01d098e46466","resourceVersion":"1895","creationTimestamp":"2025-01-27T12:31:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_31_04_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:31:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4303 chars]
	I0127 12:35:46.554899    9948 pod_ready.go:98] node "multinode-659000-m03" hosting pod "kube-proxy-sk5js" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000-m03" has status "Ready":"Unknown"
	I0127 12:35:46.554899    9948 pod_ready.go:82] duration metric: took 397.7673ms for pod "kube-proxy-sk5js" in "kube-system" namespace to be "Ready" ...
	E0127 12:35:46.554899    9948 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-659000-m03" hosting pod "kube-proxy-sk5js" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000-m03" has status "Ready":"Unknown"
	I0127 12:35:46.554899    9948 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:46.749489    9948 request.go:632] Waited for 194.0588ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-659000
	I0127 12:35:46.749489    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-659000
	I0127 12:35:46.749489    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:46.749489    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:46.749489    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:46.754326    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:46.754326    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:46.754326    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:46 GMT
	I0127 12:35:46.754326    9948 round_trippers.go:580]     Audit-Id: f9d03568-2030-408b-b7e0-db35a0757255
	I0127 12:35:46.754326    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:46.754326    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:46.754326    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:46.754326    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:46.754326    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-659000","namespace":"kube-system","uid":"52b91964-a331-4925-9e07-c8df32b4176d","resourceVersion":"1862","creationTimestamp":"2025-01-27T12:11:58Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e6c90fc43fa6c0754218ff1c4162045d","kubernetes.io/config.mirror":"e6c90fc43fa6c0754218ff1c4162045d","kubernetes.io/config.seen":"2025-01-27T12:11:51.419790825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:11:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5812 chars]
	I0127 12:35:46.949752    9948 request.go:632] Waited for 194.3032ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:46.949752    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:46.949752    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:46.949752    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:46.949752    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:46.953238    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:35:46.954115    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:46.954115    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:46.954115    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:46.954115    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:46 GMT
	I0127 12:35:46.954115    9948 round_trippers.go:580]     Audit-Id: 1229297d-0ae3-4415-bc24-02245312e592
	I0127 12:35:46.954115    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:46.954115    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:46.954553    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:46.954935    9948 pod_ready.go:98] node "multinode-659000" hosting pod "kube-scheduler-multinode-659000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000" has status "Ready":"False"
	I0127 12:35:46.955049    9948 pod_ready.go:82] duration metric: took 400.146ms for pod "kube-scheduler-multinode-659000" in "kube-system" namespace to be "Ready" ...
	E0127 12:35:46.955049    9948 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-659000" hosting pod "kube-scheduler-multinode-659000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000" has status "Ready":"False"
	I0127 12:35:46.955049    9948 pod_ready.go:39] duration metric: took 1.5914889s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:35:46.955049    9948 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:35:46.974171    9948 command_runner.go:130] > -16
	I0127 12:35:46.974171    9948 ops.go:34] apiserver oom_adj: -16
	I0127 12:35:46.974171    9948 kubeadm.go:597] duration metric: took 13.029694s to restartPrimaryControlPlane
	I0127 12:35:46.974352    9948 kubeadm.go:394] duration metric: took 13.0914766s to StartCluster
	I0127 12:35:46.974352    9948 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:35:46.974539    9948 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 12:35:46.976373    9948 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:35:46.978521    9948 start.go:235] Will wait 6m0s for node &{Name: IP:172.29.198.106 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 12:35:46.978521    9948 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:35:46.979304    9948 config.go:182] Loaded profile config "multinode-659000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:35:46.981962    9948 out.go:177] * Enabled addons: 
	I0127 12:35:46.983988    9948 out.go:177] * Verifying Kubernetes components...
	I0127 12:35:46.990038    9948 addons.go:514] duration metric: took 11.5167ms for enable addons: enabled=[]
	I0127 12:35:47.006335    9948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:35:47.275775    9948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:35:47.301208    9948 node_ready.go:35] waiting up to 6m0s for node "multinode-659000" to be "Ready" ...
	I0127 12:35:47.301439    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:47.301474    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:47.301474    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:47.301474    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:47.308418    9948 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:35:47.308418    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:47.308418    9948 round_trippers.go:580]     Audit-Id: 36f9e360-021a-4313-9fc3-519d46bbe416
	I0127 12:35:47.308418    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:47.308418    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:47.308418    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:47.308418    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:47.308418    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:47 GMT
	I0127 12:35:47.309064    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:47.801954    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:47.801954    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:47.801954    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:47.801954    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:47.805038    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:35:47.805108    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:47.805108    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:47.805108    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:47.805108    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:47.805108    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:47 GMT
	I0127 12:35:47.805167    9948 round_trippers.go:580]     Audit-Id: 30e51e57-2d57-49f4-8aaa-996ae7dc9801
	I0127 12:35:47.805167    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:47.805499    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:48.301588    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:48.301588    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:48.301588    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:48.301588    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:48.306532    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:48.306532    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:48.306532    9948 round_trippers.go:580]     Audit-Id: 1d241cdd-b2cc-40fa-a217-e3f0106e18b1
	I0127 12:35:48.306716    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:48.306716    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:48.306768    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:48.306768    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:48.306805    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:48 GMT
	I0127 12:35:48.307176    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:48.801713    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:48.801713    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:48.801713    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:48.801713    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:48.804720    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:35:48.804720    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:48.804720    9948 round_trippers.go:580]     Audit-Id: 79445d3b-cf3a-4375-8f8a-24844786a835
	I0127 12:35:48.804720    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:48.804720    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:48.804720    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:48.804720    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:48.804720    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:48 GMT
	I0127 12:35:48.805872    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:49.302141    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:49.302141    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:49.302141    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:49.302141    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:49.306596    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:49.307320    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:49.307320    9948 round_trippers.go:580]     Audit-Id: e7787a8c-3bd2-4d28-9ebd-b4dc25085a20
	I0127 12:35:49.307320    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:49.307320    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:49.307320    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:49.307320    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:49.307320    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:49 GMT
	I0127 12:35:49.307791    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:49.308361    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:35:49.801327    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:49.801327    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:49.801327    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:49.801327    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:49.805682    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:49.805788    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:49.805788    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:49.805788    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:49.805788    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:49 GMT
	I0127 12:35:49.805867    9948 round_trippers.go:580]     Audit-Id: d70d3060-48f1-4777-b3b6-e891f3efb479
	I0127 12:35:49.805867    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:49.805867    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:49.806252    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:50.301397    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:50.301397    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:50.301397    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:50.301397    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:50.306588    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:35:50.306588    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:50.306588    9948 round_trippers.go:580]     Audit-Id: ff352193-065f-4a51-b045-aa96c204d770
	I0127 12:35:50.306588    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:50.306588    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:50.306588    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:50.306588    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:50.306588    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:50 GMT
	I0127 12:35:50.307025    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:50.802132    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:50.802132    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:50.802132    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:50.802132    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:50.806350    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:50.806460    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:50.806460    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:50 GMT
	I0127 12:35:50.806460    9948 round_trippers.go:580]     Audit-Id: 87103fb1-ed34-468e-8812-b0acf460fe60
	I0127 12:35:50.806460    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:50.806546    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:50.806546    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:50.806546    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:50.806909    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:51.301795    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:51.301795    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:51.301795    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:51.301795    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:51.307328    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:35:51.307419    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:51.307419    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:51.307419    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:51.307510    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:51 GMT
	I0127 12:35:51.307510    9948 round_trippers.go:580]     Audit-Id: 86bb1816-8895-4cc6-9f39-2f92f390dc54
	I0127 12:35:51.307510    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:51.307510    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:51.307789    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:51.308487    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:35:51.801848    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:51.801960    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:51.801960    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:51.801960    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:51.808641    9948 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:35:51.808641    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:51.808641    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:51.808641    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:51 GMT
	I0127 12:35:51.808641    9948 round_trippers.go:580]     Audit-Id: 6582cb10-6afd-4ef4-83f8-be93bf836294
	I0127 12:35:51.808641    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:51.808641    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:51.808641    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:51.809373    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:52.301978    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:52.301978    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:52.302083    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:52.302083    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:52.306904    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:52.306904    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:52.307025    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:52.307025    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:52 GMT
	I0127 12:35:52.307025    9948 round_trippers.go:580]     Audit-Id: 0376b027-aef3-4f71-b932-6b82b572adaa
	I0127 12:35:52.307025    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:52.307025    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:52.307025    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:52.308064    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:52.801486    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:52.801486    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:52.801486    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:52.801486    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:52.806054    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:52.806054    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:52.806054    9948 round_trippers.go:580]     Audit-Id: c25c342c-6a13-4864-b4b4-124b54c50e02
	I0127 12:35:52.806054    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:52.806054    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:52.806170    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:52.806170    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:52.806170    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:52 GMT
	I0127 12:35:52.806703    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:53.301626    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:53.301626    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:53.301626    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:53.301626    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:53.305819    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:53.306674    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:53.306674    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:53.306674    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:53.306674    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:53.306747    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:53.306747    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:53 GMT
	I0127 12:35:53.306747    9948 round_trippers.go:580]     Audit-Id: 79527147-335f-4c85-961d-9af5c797b5f9
	I0127 12:35:53.307003    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:53.801570    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:53.801570    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:53.801570    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:53.801570    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:53.806616    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:35:53.806718    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:53.806718    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:53.806718    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:53.806718    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:53.806718    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:53 GMT
	I0127 12:35:53.806718    9948 round_trippers.go:580]     Audit-Id: e5af9b4d-0a2b-467f-9b30-4154a06cb3b3
	I0127 12:35:53.806718    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:53.807354    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:53.808028    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:35:54.301636    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:54.301636    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:54.301636    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:54.301636    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:54.307046    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:35:54.307046    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:54.307046    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:54 GMT
	I0127 12:35:54.307046    9948 round_trippers.go:580]     Audit-Id: 49e45828-a9e3-45f4-af06-f03cc8beaa7b
	I0127 12:35:54.307046    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:54.307046    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:54.307046    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:54.307046    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:54.307934    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:54.802073    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:54.802073    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:54.802197    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:54.802197    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:54.806091    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:35:54.806221    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:54.806221    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:54.806221    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:54.806274    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:54.806274    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:54.806274    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:54 GMT
	I0127 12:35:54.806274    9948 round_trippers.go:580]     Audit-Id: 34597499-8ff1-4310-beb5-7d428276851a
	I0127 12:35:54.806274    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:55.302465    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:55.302567    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:55.302567    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:55.302567    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:55.308084    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:35:55.308084    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:55.308084    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:55 GMT
	I0127 12:35:55.308084    9948 round_trippers.go:580]     Audit-Id: 93d49b61-09ce-41fc-842c-926a7eac715c
	I0127 12:35:55.308084    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:55.308192    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:55.308192    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:55.308192    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:55.308422    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:55.801620    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:55.802194    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:55.802194    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:55.802194    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:55.806794    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:55.807361    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:55.807361    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:55 GMT
	I0127 12:35:55.807361    9948 round_trippers.go:580]     Audit-Id: dd9b87a4-8a74-4061-9973-e41d1f72df58
	I0127 12:35:55.807361    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:55.807361    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:55.807361    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:55.807361    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:55.807699    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:56.302673    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:56.302673    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:56.302673    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:56.302673    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:56.306892    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:56.306892    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:56.306999    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:56.306999    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:56.306999    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:56 GMT
	I0127 12:35:56.307032    9948 round_trippers.go:580]     Audit-Id: af2120f8-a0ef-4f1b-ba51-156eb95fa991
	I0127 12:35:56.307032    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:56.307032    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:56.307065    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:56.307686    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:35:56.801426    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:56.801426    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:56.801426    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:56.801426    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:56.805434    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:56.805434    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:56.805434    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:56.805434    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:56.805434    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:56.805434    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:56.805434    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:56 GMT
	I0127 12:35:56.805434    9948 round_trippers.go:580]     Audit-Id: 7120e14b-84f6-42d4-b4b3-4a453569483d
	I0127 12:35:56.805434    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:57.302348    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:57.302348    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:57.302467    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:57.302467    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:57.310152    9948 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 12:35:57.310181    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:57.310181    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:57.310181    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:57.310181    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:57.310271    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:57 GMT
	I0127 12:35:57.310271    9948 round_trippers.go:580]     Audit-Id: f7b94c7a-4684-4852-92dd-c334c3237005
	I0127 12:35:57.310271    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:57.311115    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:57.801639    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:57.801639    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:57.801639    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:57.801639    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:57.808000    9948 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:35:57.808726    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:57.808726    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:57 GMT
	I0127 12:35:57.808726    9948 round_trippers.go:580]     Audit-Id: 13d61e92-67a9-4701-ad50-2e94c15e8331
	I0127 12:35:57.808726    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:57.808726    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:57.808726    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:57.808774    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:57.809458    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:58.301935    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:58.301935    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:58.301935    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:58.301935    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:58.309691    9948 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 12:35:58.309691    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:58.309691    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:58.309691    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:58 GMT
	I0127 12:35:58.309691    9948 round_trippers.go:580]     Audit-Id: d99ab032-a5a5-40f7-9cc5-4971d572177f
	I0127 12:35:58.309691    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:58.309691    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:58.309875    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:58.309986    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:58.310851    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:35:58.802180    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:58.802180    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:58.802180    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:58.802180    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:58.806922    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:58.806922    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:58.806922    9948 round_trippers.go:580]     Audit-Id: 4483bb59-13a1-493e-8017-205f017898b7
	I0127 12:35:58.806922    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:58.806922    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:58.806922    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:58.806922    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:58.806922    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:58 GMT
	I0127 12:35:58.808310    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:59.302120    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:59.302120    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:59.302120    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:59.302120    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:59.306432    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:59.307084    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:59.307084    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:59.307084    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:59.307084    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:59 GMT
	I0127 12:35:59.307084    9948 round_trippers.go:580]     Audit-Id: 28390c7d-c3e2-4f19-9a3c-5c0f82fa4169
	I0127 12:35:59.307084    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:59.307084    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:59.307432    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:59.802177    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:59.802177    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:59.802177    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:59.802177    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:59.806775    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:59.806775    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:59.807200    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:59.807200    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:59 GMT
	I0127 12:35:59.807200    9948 round_trippers.go:580]     Audit-Id: c592dd70-08ea-478b-8b6c-048056a610d7
	I0127 12:35:59.807200    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:59.807200    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:59.807200    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:59.807592    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:00.301394    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:00.301394    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:00.301394    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:00.301394    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:00.306361    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:00.306426    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:00.306426    9948 round_trippers.go:580]     Audit-Id: 262dde43-dda4-4826-aa43-36de1afc877a
	I0127 12:36:00.306426    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:00.306426    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:00.306487    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:00.306487    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:00.306487    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:00 GMT
	I0127 12:36:00.306676    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:00.802794    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:00.802794    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:00.802962    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:00.802962    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:00.807128    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:00.807128    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:00.807128    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:00.807128    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:00.807128    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:00.807128    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:00 GMT
	I0127 12:36:00.807128    9948 round_trippers.go:580]     Audit-Id: 0ae6b675-41fb-42f3-a026-8f54dc6d6141
	I0127 12:36:00.807128    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:00.807919    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:00.808510    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:01.302478    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:01.302551    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:01.302551    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:01.302551    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:01.306345    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:01.307277    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:01.307277    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:01.307277    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:01.307277    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:01 GMT
	I0127 12:36:01.307277    9948 round_trippers.go:580]     Audit-Id: 7224d9ba-5aa3-4833-a81e-9649baae8fb4
	I0127 12:36:01.307381    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:01.307381    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:01.307381    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:01.802376    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:01.802376    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:01.802376    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:01.802376    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:01.806436    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:01.806436    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:01.806493    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:01.806493    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:01.806493    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:01.806493    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:01 GMT
	I0127 12:36:01.806493    9948 round_trippers.go:580]     Audit-Id: 0d9da99c-2904-4f4f-84ec-3730a00d79fe
	I0127 12:36:01.806493    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:01.807287    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:02.301603    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:02.301603    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:02.301603    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:02.301603    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:02.305179    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:02.305179    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:02.305395    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:02.305395    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:02.305395    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:02.305395    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:02 GMT
	I0127 12:36:02.305395    9948 round_trippers.go:580]     Audit-Id: 7a507d30-eae2-493c-a3d5-613cf8553d6e
	I0127 12:36:02.305395    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:02.305623    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:02.802639    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:02.802731    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:02.802731    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:02.802731    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:02.808465    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:02.808492    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:02.808492    9948 round_trippers.go:580]     Audit-Id: 1f04222b-76f3-44e0-900e-ac6918d3e378
	I0127 12:36:02.808492    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:02.808492    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:02.808541    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:02.808541    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:02.808541    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:02 GMT
	I0127 12:36:02.810083    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:02.810486    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:03.301967    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:03.301967    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:03.301967    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:03.301967    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:03.306638    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:03.306638    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:03.306638    9948 round_trippers.go:580]     Audit-Id: 9e94bbdb-a993-40f2-99b3-761e59a2d333
	I0127 12:36:03.306638    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:03.306638    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:03.306638    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:03.306638    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:03.306638    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:03 GMT
	I0127 12:36:03.306978    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:03.801941    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:03.802005    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:03.802005    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:03.802005    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:03.806897    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:03.807004    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:03.807004    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:03.807004    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:03.807004    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:03.807004    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:03.807004    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:03 GMT
	I0127 12:36:03.807004    9948 round_trippers.go:580]     Audit-Id: d1f1551a-35b5-4082-b8fa-7e3e05edc0b8
	I0127 12:36:03.807275    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:04.302050    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:04.302050    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:04.302050    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:04.302050    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:04.307985    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:04.308118    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:04.308118    9948 round_trippers.go:580]     Audit-Id: 81173ab7-8afd-471f-898a-bf9ade4902b2
	I0127 12:36:04.308118    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:04.308118    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:04.308118    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:04.308118    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:04.308118    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:04 GMT
	I0127 12:36:04.308196    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:04.801902    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:04.801902    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:04.801902    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:04.801902    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:04.807155    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:04.807155    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:04.807155    9948 round_trippers.go:580]     Audit-Id: 349a1595-f1d4-4315-9ffb-4a65b00557b1
	I0127 12:36:04.807155    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:04.807155    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:04.807155    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:04.807155    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:04.807262    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:04 GMT
	I0127 12:36:04.807679    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:05.302030    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:05.302030    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:05.302030    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:05.302030    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:05.306743    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:05.306743    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:05.306743    9948 round_trippers.go:580]     Audit-Id: 31444957-8e84-496f-ad90-8f51aea870f7
	I0127 12:36:05.306743    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:05.306743    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:05.306743    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:05.306957    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:05.306957    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:05 GMT
	I0127 12:36:05.307246    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:05.307690    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:05.802683    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:05.802683    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:05.802683    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:05.802817    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:05.807163    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:05.807163    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:05.807163    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:05.807163    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:05 GMT
	I0127 12:36:05.807163    9948 round_trippers.go:580]     Audit-Id: b78b8e0c-5b64-48bf-98e4-89a9298d378c
	I0127 12:36:05.807163    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:05.807163    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:05.807163    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:05.807163    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:06.302723    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:06.302753    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:06.302818    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:06.302847    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:06.307621    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:06.307708    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:06.307708    9948 round_trippers.go:580]     Audit-Id: 501cde61-d8b3-4f85-b17a-7fec455c4a59
	I0127 12:36:06.307708    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:06.307708    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:06.307784    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:06.307784    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:06.307784    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:06 GMT
	I0127 12:36:06.308036    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:06.802987    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:06.802987    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:06.802987    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:06.802987    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:06.808013    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:06.808013    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:06.808013    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:06.808153    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:06.808153    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:06 GMT
	I0127 12:36:06.808153    9948 round_trippers.go:580]     Audit-Id: 3d70d454-a8de-49ff-a85a-7b5369e73188
	I0127 12:36:06.808153    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:06.808153    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:06.808466    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:07.302240    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:07.302240    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:07.302240    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:07.302240    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:07.307267    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:07.307313    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:07.307313    9948 round_trippers.go:580]     Audit-Id: b97334de-dbe2-4fc4-bc45-175918d6ff31
	I0127 12:36:07.307363    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:07.307363    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:07.307363    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:07.307363    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:07.307363    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:07 GMT
	I0127 12:36:07.307537    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:07.802403    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:07.802403    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:07.802403    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:07.802403    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:07.806873    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:07.806905    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:07.806905    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:07.806905    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:07 GMT
	I0127 12:36:07.806905    9948 round_trippers.go:580]     Audit-Id: cb80eb35-b5f1-401d-b1bd-9007c3be701d
	I0127 12:36:07.806905    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:07.806905    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:07.806905    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:07.807305    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:07.807305    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:08.301529    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:08.301529    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:08.301529    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:08.301529    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:08.306666    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:08.306747    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:08.306747    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:08.306747    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:08.306747    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:08 GMT
	I0127 12:36:08.306747    9948 round_trippers.go:580]     Audit-Id: 50962b7a-c0c0-43e1-a768-816272b98ac7
	I0127 12:36:08.306747    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:08.306820    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:08.307175    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:08.802483    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:08.802483    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:08.802483    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:08.802483    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:08.807264    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:08.807264    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:08.807338    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:08.807338    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:08 GMT
	I0127 12:36:08.807338    9948 round_trippers.go:580]     Audit-Id: 1076b5d8-85d8-4d1b-85b6-311915086cad
	I0127 12:36:08.807338    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:08.807338    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:08.807338    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:08.807736    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:09.301610    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:09.301610    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:09.301610    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:09.301610    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:09.306594    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:09.306716    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:09.306816    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:09.306816    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:09.306816    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:09 GMT
	I0127 12:36:09.306865    9948 round_trippers.go:580]     Audit-Id: dfe9c8e4-d7b3-482e-a39b-bf6a16659349
	I0127 12:36:09.306865    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:09.306865    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:09.307062    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:09.802056    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:09.802056    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:09.802056    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:09.802056    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:09.808430    9948 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:36:09.808430    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:09.808430    9948 round_trippers.go:580]     Audit-Id: 2a691a54-2fb2-4181-94e4-4d042a53e533
	I0127 12:36:09.808430    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:09.808430    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:09.808430    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:09.808430    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:09.808430    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:09 GMT
	I0127 12:36:09.808430    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:09.809156    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:10.301504    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:10.301504    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:10.301504    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:10.301504    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:10.306748    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:10.306748    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:10.306748    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:10.306748    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:10.306748    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:10.306748    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:10.306748    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:10 GMT
	I0127 12:36:10.306748    9948 round_trippers.go:580]     Audit-Id: bf9e319d-37f6-48e0-8e9a-a47bcd455abd
	I0127 12:36:10.306748    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:10.801560    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:10.801560    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:10.801560    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:10.801560    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:10.806525    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:10.806525    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:10.806525    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:10.806525    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:10.806525    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:10.806525    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:10.806525    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:10 GMT
	I0127 12:36:10.806525    9948 round_trippers.go:580]     Audit-Id: aef7fd32-1085-49e2-a197-c3be119a43e2
	I0127 12:36:10.806750    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:11.302078    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:11.302585    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:11.302585    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:11.302585    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:11.307435    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:11.307435    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:11.307604    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:11.307604    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:11 GMT
	I0127 12:36:11.307604    9948 round_trippers.go:580]     Audit-Id: 6687c558-d01a-428a-a22d-5dee880e730a
	I0127 12:36:11.307604    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:11.307604    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:11.307604    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:11.307877    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:11.801959    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:11.801959    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:11.801959    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:11.801959    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:11.807415    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:11.807473    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:11.807473    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:11.807473    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:11.807473    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:11.807473    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:11 GMT
	I0127 12:36:11.807473    9948 round_trippers.go:580]     Audit-Id: 8c919ebd-dc8d-42d4-b8fa-38bdf4307836
	I0127 12:36:11.807473    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:11.807791    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:12.301512    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:12.301512    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:12.301512    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:12.301512    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:12.305325    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:12.305325    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:12.305428    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:12.305447    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:12.305447    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:12.305447    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:12 GMT
	I0127 12:36:12.305447    9948 round_trippers.go:580]     Audit-Id: 64aa28be-8592-41ab-873c-a0ef7d93f091
	I0127 12:36:12.305447    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:12.305695    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:12.306271    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:12.801928    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:12.802508    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:12.802508    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:12.802508    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:12.806912    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:12.806912    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:12.806912    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:12.806912    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:12.806912    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:12.806912    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:12.806912    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:12 GMT
	I0127 12:36:12.806912    9948 round_trippers.go:580]     Audit-Id: 3b893e8d-b38c-4bd6-921a-03668ef2bd09
	I0127 12:36:12.807369    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:13.301830    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:13.301830    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:13.301830    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:13.301830    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:13.306935    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:13.307663    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:13.307663    9948 round_trippers.go:580]     Audit-Id: 047f9ed5-c7bc-4f3a-9dd6-2a1a588a002e
	I0127 12:36:13.307663    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:13.307663    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:13.307663    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:13.307663    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:13.307663    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:13 GMT
	I0127 12:36:13.307757    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:13.801761    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:13.801761    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:13.801761    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:13.801761    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:13.807176    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:13.807176    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:13.807394    9948 round_trippers.go:580]     Audit-Id: 25f6a7db-40aa-4d5f-981f-2e36e9132c78
	I0127 12:36:13.807394    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:13.807394    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:13.807394    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:13.807394    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:13.807394    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:13 GMT
	I0127 12:36:13.807394    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:14.302157    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:14.302157    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:14.302157    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:14.302157    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:14.307210    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:14.307210    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:14.307210    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:14.307210    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:14.307210    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:14.307210    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:14 GMT
	I0127 12:36:14.307210    9948 round_trippers.go:580]     Audit-Id: 1c3c4965-5dd5-4a9b-91f3-8bb34ba25b22
	I0127 12:36:14.307210    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:14.307982    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:14.309513    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:14.801629    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:14.801629    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:14.801629    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:14.801629    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:14.806808    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:14.806874    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:14.806874    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:14 GMT
	I0127 12:36:14.806874    9948 round_trippers.go:580]     Audit-Id: 533da9ef-0c00-4280-a65c-bbca8f1dabc8
	I0127 12:36:14.806963    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:14.807036    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:14.807036    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:14.807036    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:14.807374    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:15.302094    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:15.302094    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:15.302094    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:15.302094    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:15.307048    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:15.307048    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:15.307048    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:15.307048    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:15.307048    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:15.307048    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:15.307048    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:15 GMT
	I0127 12:36:15.307048    9948 round_trippers.go:580]     Audit-Id: b39dcd86-9358-4b13-9a4d-bed4ec175ab2
	I0127 12:36:15.307048    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:15.802663    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:15.802663    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:15.802663    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:15.802663    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:15.807456    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:15.807584    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:15.807584    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:15 GMT
	I0127 12:36:15.807584    9948 round_trippers.go:580]     Audit-Id: 33cb5268-1248-46c5-8e2c-ee2ac34f3f17
	I0127 12:36:15.807584    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:15.807584    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:15.807584    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:15.807584    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:15.807728    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:16.302685    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:16.302685    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:16.302685    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:16.302685    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:16.307209    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:16.307209    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:16.307209    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:16.307209    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:16.307300    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:16.307300    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:16 GMT
	I0127 12:36:16.307300    9948 round_trippers.go:580]     Audit-Id: 026bbfa3-af7c-42f1-809e-d3987da29eb4
	I0127 12:36:16.307300    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:16.307527    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:16.802860    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:16.802934    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:16.802934    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:16.802934    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:16.807148    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:16.807206    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:16.807206    9948 round_trippers.go:580]     Audit-Id: a4c6d309-0927-42c2-a583-ff2f1cde7443
	I0127 12:36:16.807206    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:16.807206    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:16.807206    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:16.807206    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:16.807206    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:16 GMT
	I0127 12:36:16.807739    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:16.808247    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:17.302962    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:17.302962    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:17.302962    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:17.302962    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:17.308068    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:17.308068    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:17.308209    9948 round_trippers.go:580]     Audit-Id: 7f38be74-d83e-4adf-81cd-24ccf7814720
	I0127 12:36:17.308209    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:17.308209    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:17.308209    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:17.308209    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:17.308209    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:17 GMT
	I0127 12:36:17.308497    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:17.801642    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:17.801642    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:17.801642    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:17.802109    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:17.805820    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:17.805888    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:17.805888    9948 round_trippers.go:580]     Audit-Id: 7bf90979-82e8-4e43-a5fd-63cbe0045643
	I0127 12:36:17.805966    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:17.805966    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:17.805966    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:17.805966    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:17.805966    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:17 GMT
	I0127 12:36:17.806350    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:18.302084    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:18.302084    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:18.302084    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:18.302084    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:18.305911    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:18.305977    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:18.305977    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:18 GMT
	I0127 12:36:18.305977    9948 round_trippers.go:580]     Audit-Id: 5946fc9b-60fe-4a7f-87cc-7376ff4ab8c3
	I0127 12:36:18.305977    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:18.305977    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:18.305977    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:18.306047    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:18.307350    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:18.801848    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:18.801848    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:18.801848    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:18.801848    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:18.805775    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:18.805775    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:18.805775    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:18.805775    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:18.805775    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:18.805775    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:18 GMT
	I0127 12:36:18.805775    9948 round_trippers.go:580]     Audit-Id: 8fc4aecf-8db5-4c36-92b1-76eb5497c630
	I0127 12:36:18.805775    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:18.806227    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:19.301603    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:19.301603    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:19.301603    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:19.301603    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:19.305828    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:19.305828    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:19.305889    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:19.305889    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:19.305889    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:19 GMT
	I0127 12:36:19.305889    9948 round_trippers.go:580]     Audit-Id: 28881e53-ad10-4ede-aae1-d4aa1d2448dd
	I0127 12:36:19.305889    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:19.305889    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:19.307137    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:19.307137    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:19.802099    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:19.802099    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:19.802099    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:19.802099    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:19.806326    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:19.806326    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:19.806392    9948 round_trippers.go:580]     Audit-Id: 3a515eb8-11fd-4385-9c2e-7093ce7a2a6e
	I0127 12:36:19.806392    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:19.806392    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:19.806392    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:19.806392    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:19.806392    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:19 GMT
	I0127 12:36:19.807014    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:20.301696    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:20.301696    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:20.301696    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:20.301696    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:20.306489    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:20.306925    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:20.306925    9948 round_trippers.go:580]     Audit-Id: 4d8f5af0-9f2b-43df-81a4-73e3ba345c7e
	I0127 12:36:20.306925    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:20.306925    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:20.306925    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:20.306925    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:20.306925    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:20 GMT
	I0127 12:36:20.307265    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:20.802233    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:20.802233    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:20.802233    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:20.802233    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:20.806437    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:20.806546    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:20.806546    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:20.806546    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:20.806546    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:20 GMT
	I0127 12:36:20.806546    9948 round_trippers.go:580]     Audit-Id: aa4fb124-4b0c-4e9f-84e0-d1b36701cb2a
	I0127 12:36:20.806546    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:20.806546    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:20.806821    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:21.301816    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:21.301816    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:21.301816    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:21.301816    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:21.306056    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:21.306056    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:21.306056    9948 round_trippers.go:580]     Audit-Id: 3662c43c-7b6c-428b-9e56-0d07e57147c4
	I0127 12:36:21.306056    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:21.306056    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:21.306056    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:21.306056    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:21.306056    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:21 GMT
	I0127 12:36:21.306394    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:21.801640    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:21.801640    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:21.801640    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:21.801640    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:21.805718    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:21.805789    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:21.805789    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:21.805789    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:21 GMT
	I0127 12:36:21.805789    9948 round_trippers.go:580]     Audit-Id: 97231ee4-b0d1-4c65-87ed-465f5bb47979
	I0127 12:36:21.805789    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:21.805856    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:21.805856    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:21.806193    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:21.806661    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:22.301665    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:22.301665    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:22.301665    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:22.301665    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:22.306745    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:22.306745    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:22.306745    9948 round_trippers.go:580]     Audit-Id: fb669eac-56b3-4e9b-afd7-4bddac9303b0
	I0127 12:36:22.306745    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:22.306745    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:22.306871    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:22.306871    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:22.306871    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:22 GMT
	I0127 12:36:22.307200    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:22.801710    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:22.802381    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:22.802381    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:22.802381    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:22.806038    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:22.806153    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:22.806153    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:22.806153    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:22.806153    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:22.806153    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:22.806153    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:22 GMT
	I0127 12:36:22.806153    9948 round_trippers.go:580]     Audit-Id: ae0c2a71-f5ad-4a6e-80d1-51ce243bfc64
	I0127 12:36:22.806492    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:23.302579    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:23.302665    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:23.302665    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:23.302665    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:23.307014    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:23.307099    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:23.307099    9948 round_trippers.go:580]     Audit-Id: 7a125feb-c987-4db6-90c5-c45a848e9cff
	I0127 12:36:23.307099    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:23.307099    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:23.307099    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:23.307099    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:23.307099    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:23 GMT
	I0127 12:36:23.307099    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:23.803716    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:23.803908    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:23.803908    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:23.804007    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:23.808266    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:23.808368    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:23.808368    9948 round_trippers.go:580]     Audit-Id: 7a18bfcc-33d2-42f2-a4e7-eb722491297e
	I0127 12:36:23.808436    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:23.808436    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:23.808436    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:23.808436    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:23.808436    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:23 GMT
	I0127 12:36:23.808530    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:23.809695    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:24.302094    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:24.302094    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:24.302094    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:24.302094    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:24.306229    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:24.306369    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:24.306369    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:24.306369    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:24.306369    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:24 GMT
	I0127 12:36:24.306369    9948 round_trippers.go:580]     Audit-Id: f98ff32a-ded4-49ba-beea-31d01e567f31
	I0127 12:36:24.306369    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:24.306369    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:24.306884    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:24.802032    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:24.802032    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:24.802032    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:24.802032    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:24.806857    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:24.806857    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:24.806857    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:24.806857    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:24.806857    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:24 GMT
	I0127 12:36:24.806857    9948 round_trippers.go:580]     Audit-Id: 7f1ccc0b-fa0b-48eb-8ed8-084905216477
	I0127 12:36:24.806857    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:24.806857    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:24.807228    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:25.301795    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:25.301795    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:25.301795    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:25.301795    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:25.307427    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:25.307427    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:25.307497    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:25.307497    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:25.307497    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:25.307497    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:25 GMT
	I0127 12:36:25.307497    9948 round_trippers.go:580]     Audit-Id: 7a9fc569-4008-44e7-bdcf-213be93d278f
	I0127 12:36:25.307497    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:25.307759    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:25.802906    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:25.802906    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:25.803084    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:25.803084    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:25.808103    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:25.808168    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:25.808168    9948 round_trippers.go:580]     Audit-Id: 775ab9ae-db4b-454f-a6f2-477b0d689244
	I0127 12:36:25.808168    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:25.808168    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:25.808168    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:25.808168    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:25.808168    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:25 GMT
	I0127 12:36:25.808326    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:26.301904    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:26.301904    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:26.301904    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:26.301904    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:26.306406    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:26.306457    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:26.306457    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:26.306457    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:26 GMT
	I0127 12:36:26.306495    9948 round_trippers.go:580]     Audit-Id: d728ea5a-a1eb-46e3-bb98-4fd7ba61b7d1
	I0127 12:36:26.306495    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:26.306495    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:26.306495    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:26.306694    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:26.307678    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:26.802281    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:26.802281    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:26.802281    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:26.802281    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:26.805887    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:26.806871    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:26.806871    9948 round_trippers.go:580]     Audit-Id: 86fc58ae-3e2a-4d3e-845b-6f251be9180f
	I0127 12:36:26.806871    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:26.806871    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:26.806871    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:26.806871    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:26.806871    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:26 GMT
	I0127 12:36:26.807272    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:27.302565    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:27.302565    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:27.302565    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:27.302565    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:27.307037    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:27.307037    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:27.307158    9948 round_trippers.go:580]     Audit-Id: bf219283-a7b0-46fb-be16-4b193abe4ae5
	I0127 12:36:27.307158    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:27.307158    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:27.307158    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:27.307158    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:27.307158    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:27 GMT
	I0127 12:36:27.307324    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:27.802414    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:27.802534    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:27.802534    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:27.802534    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:27.805663    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:27.805663    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:27.805663    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:27.805663    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:27.805663    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:27.805663    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:27 GMT
	I0127 12:36:27.805663    9948 round_trippers.go:580]     Audit-Id: f54a1816-a932-4619-8752-0a528c064fa0
	I0127 12:36:27.805663    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:27.806103    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:28.301778    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:28.301778    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:28.301778    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:28.301778    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:28.305749    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:28.305749    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:28.305749    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:28.305749    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:28.305749    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:28.305749    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:28.305749    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:28 GMT
	I0127 12:36:28.305749    9948 round_trippers.go:580]     Audit-Id: 3aefacb0-46be-451d-9889-f58dbbb5649c
	I0127 12:36:28.306304    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:28.802935    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:28.803012    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:28.803012    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:28.803012    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:28.807911    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:28.807979    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:28.807979    9948 round_trippers.go:580]     Audit-Id: 126ab12d-5d06-4665-ab6d-d759801f2588
	I0127 12:36:28.807979    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:28.807979    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:28.807979    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:28.807979    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:28.807979    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:28 GMT
	I0127 12:36:28.809462    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:28.810230    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:29.302504    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:29.302808    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:29.302808    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:29.302885    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:29.306698    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:29.306698    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:29.306698    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:29.306698    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:29.306698    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:29.306698    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:29.306698    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:29 GMT
	I0127 12:36:29.306698    9948 round_trippers.go:580]     Audit-Id: cf9acb4c-ff42-4acf-9d6e-8aa371733611
	I0127 12:36:29.306698    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:29.802983    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:29.803054    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:29.803054    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:29.803054    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:29.806805    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:29.806805    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:29.806872    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:29 GMT
	I0127 12:36:29.806872    9948 round_trippers.go:580]     Audit-Id: 95247832-a3d7-4b82-a006-f1613cd7d2f9
	I0127 12:36:29.806872    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:29.806872    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:29.806872    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:29.806872    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:29.807346    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:30.302638    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:30.302638    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:30.302638    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:30.302638    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:30.307303    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:30.307303    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:30.307303    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:30.307303    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:30.307303    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:30.307303    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:30 GMT
	I0127 12:36:30.307303    9948 round_trippers.go:580]     Audit-Id: df371b34-e74a-490f-8ba3-fa30d6ec44c7
	I0127 12:36:30.307303    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:30.307546    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:30.802554    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:30.802554    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:30.802554    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:30.802554    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:30.807573    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:30.807573    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:30.807573    9948 round_trippers.go:580]     Audit-Id: a1e16273-558f-40ff-b196-346cf0d2aafc
	I0127 12:36:30.807573    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:30.807573    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:30.807573    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:30.807573    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:30.807573    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:30 GMT
	I0127 12:36:30.807573    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:31.302489    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:31.302489    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:31.302489    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:31.302489    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:31.307396    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:31.307396    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:31.307396    9948 round_trippers.go:580]     Audit-Id: 81033689-77d8-4e33-a66a-5f5a1e0438dd
	I0127 12:36:31.307396    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:31.307396    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:31.307396    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:31.307632    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:31.307632    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:31 GMT
	I0127 12:36:31.307786    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:31.308381    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:31.801956    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:31.801956    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:31.801956    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:31.801956    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:31.807383    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:31.807383    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:31.807383    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:31.807383    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:31.807383    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:31 GMT
	I0127 12:36:31.807383    9948 round_trippers.go:580]     Audit-Id: 21c0ccaa-4540-48a3-8be8-838ebeee9c2d
	I0127 12:36:31.807383    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:31.807484    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:31.807845    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:32.302894    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:32.302894    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:32.302894    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:32.302894    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:32.307832    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:32.307899    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:32.307899    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:32.307899    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:32.307976    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:32.307976    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:32.307976    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:32 GMT
	I0127 12:36:32.307976    9948 round_trippers.go:580]     Audit-Id: 0f668223-a2c2-43e4-99f5-0513fec4861f
	I0127 12:36:32.308715    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:32.802466    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:32.802466    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:32.802466    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:32.802466    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:32.805440    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:32.805542    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:32.805542    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:32.805542    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:32.805542    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:32 GMT
	I0127 12:36:32.805542    9948 round_trippers.go:580]     Audit-Id: 4d21f474-13f9-4d03-8c4f-788d85208ace
	I0127 12:36:32.805542    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:32.805542    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:32.806058    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:33.302377    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:33.302377    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:33.302377    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:33.302377    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:33.307502    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:33.307593    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:33.307593    9948 round_trippers.go:580]     Audit-Id: 0d5a2b96-20ee-43d1-94e4-6caac8f3a1bb
	I0127 12:36:33.307593    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:33.307593    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:33.307593    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:33.307593    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:33.307593    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:33 GMT
	I0127 12:36:33.307992    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1987","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0127 12:36:33.308641    9948 node_ready.go:49] node "multinode-659000" has status "Ready":"True"
	I0127 12:36:33.308709    9948 node_ready.go:38] duration metric: took 46.0070183s for node "multinode-659000" to be "Ready" ...
	I0127 12:36:33.308793    9948 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:36:33.308897    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods
	I0127 12:36:33.308897    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:33.308897    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:33.308897    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:33.313244    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:33.313856    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:33.313856    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:33.313856    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:33.313924    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:33 GMT
	I0127 12:36:33.313924    9948 round_trippers.go:580]     Audit-Id: 0730b251-13a7-4fd7-9649-390e753b15c3
	I0127 12:36:33.313924    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:33.313924    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:33.315523    9948 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1988"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89983 chars]
	I0127 12:36:33.320195    9948 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:33.320195    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:33.320195    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:33.320195    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:33.320195    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:33.322953    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:33.322953    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:33.322953    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:33.322953    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:33.322953    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:33 GMT
	I0127 12:36:33.322953    9948 round_trippers.go:580]     Audit-Id: 2a2a77f8-3199-4e12-b2aa-dec11e378238
	I0127 12:36:33.322953    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:33.323906    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:33.323970    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:33.324546    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:33.324546    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:33.324546    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:33.324744    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:33.327635    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:33.327719    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:33.327719    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:33.327719    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:33.327719    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:33.327719    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:33.327719    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:33 GMT
	I0127 12:36:33.327719    9948 round_trippers.go:580]     Audit-Id: 569e0565-32ac-4968-8993-e251035f54f1
	I0127 12:36:33.327719    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1987","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0127 12:36:33.820814    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:33.820976    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:33.820976    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:33.820976    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:33.826489    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:33.826489    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:33.826489    9948 round_trippers.go:580]     Audit-Id: 77dc52d9-d416-4b93-8086-4cc47fae25db
	I0127 12:36:33.826489    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:33.826489    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:33.826489    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:33.826489    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:33.826489    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:33 GMT
	I0127 12:36:33.827198    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:33.828133    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:33.828133    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:33.828133    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:33.828133    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:33.831399    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:33.831476    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:33.831476    9948 round_trippers.go:580]     Audit-Id: 17e23220-0633-401a-b8ee-a2212ec49798
	I0127 12:36:33.831476    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:33.831476    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:33.831476    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:33.831476    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:33.831476    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:33 GMT
	I0127 12:36:33.831911    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1987","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0127 12:36:34.321352    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:34.321352    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:34.321352    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:34.321352    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:34.326089    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:34.326089    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:34.326089    9948 round_trippers.go:580]     Audit-Id: 62af299b-9422-4a85-9f23-6896769f4a83
	I0127 12:36:34.326089    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:34.326089    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:34.326089    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:34.326089    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:34.326089    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:34 GMT
	I0127 12:36:34.326089    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:34.327429    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:34.327474    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:34.327474    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:34.327520    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:34.329972    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:34.329972    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:34.329972    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:34.329972    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:34.329972    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:34 GMT
	I0127 12:36:34.329972    9948 round_trippers.go:580]     Audit-Id: 391a1c3b-fd97-4a2f-98bd-a95ea28fc080
	I0127 12:36:34.329972    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:34.329972    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:34.330362    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1987","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0127 12:36:34.822026    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:34.822026    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:34.822108    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:34.822108    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:34.827094    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:34.827094    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:34.827094    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:34.827094    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:34.827094    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:34.827094    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:34.827094    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:34 GMT
	I0127 12:36:34.827094    9948 round_trippers.go:580]     Audit-Id: e912fe5d-c2c1-4701-9740-bac6cf17ac06
	I0127 12:36:34.827331    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:34.828034    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:34.828034    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:34.828034    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:34.828148    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:34.831865    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:34.831865    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:34.831865    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:34.831865    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:34.831865    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:34.831865    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:34 GMT
	I0127 12:36:34.831865    9948 round_trippers.go:580]     Audit-Id: a5248d15-b0e5-450b-8d82-d751d51bf412
	I0127 12:36:34.831865    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:34.831865    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1987","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0127 12:36:35.320703    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:35.320703    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:35.320703    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:35.320703    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:35.324643    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:35.324712    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:35.324712    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:35.324712    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:35.324712    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:35.324712    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:35.324712    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:35 GMT
	I0127 12:36:35.324712    9948 round_trippers.go:580]     Audit-Id: ad680456-dba7-4567-8a65-c1931a0ffa52
	I0127 12:36:35.324943    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:35.325795    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:35.325795    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:35.325795    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:35.325795    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:35.328791    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:35.328791    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:35.328791    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:35.328791    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:35.328791    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:35.328791    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:35.328791    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:35 GMT
	I0127 12:36:35.328791    9948 round_trippers.go:580]     Audit-Id: 90bf4e39-bfa2-4e12-817e-7f9382329bcc
	I0127 12:36:35.329723    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:35.330210    9948 pod_ready.go:103] pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:35.820464    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:35.820464    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:35.820464    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:35.820464    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:35.825481    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:35.825481    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:35.825481    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:35.825481    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:35 GMT
	I0127 12:36:35.825481    9948 round_trippers.go:580]     Audit-Id: 6eca0920-d2d5-41bd-9284-cb068ef4926b
	I0127 12:36:35.825481    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:35.825481    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:35.825481    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:35.825481    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:35.826780    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:35.826780    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:35.826780    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:35.826895    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:35.829095    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:35.829778    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:35.829778    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:35.829778    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:35.829778    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:35 GMT
	I0127 12:36:35.829778    9948 round_trippers.go:580]     Audit-Id: 1e6da329-2013-48ae-80ed-2544432dc75f
	I0127 12:36:35.829778    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:35.829778    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:35.830341    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:36.320699    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:36.320699    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:36.320699    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:36.320699    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:36.324439    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:36.324504    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:36.324504    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:36.324504    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:36.324504    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:36.324504    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:36.324504    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:36 GMT
	I0127 12:36:36.324504    9948 round_trippers.go:580]     Audit-Id: f6703be8-18fb-4eec-a00c-a258b1deff1e
	I0127 12:36:36.324754    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:36.325224    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:36.325224    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:36.325224    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:36.325224    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:36.328501    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:36.328533    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:36.328577    9948 round_trippers.go:580]     Audit-Id: 4642b17f-d803-49ff-b56e-45a5abcd4d44
	I0127 12:36:36.328577    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:36.328577    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:36.328577    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:36.328577    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:36.328605    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:36 GMT
	I0127 12:36:36.328935    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:36.821479    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:36.821479    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:36.821479    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:36.821479    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:36.826092    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:36.826092    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:36.826092    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:36 GMT
	I0127 12:36:36.826092    9948 round_trippers.go:580]     Audit-Id: 7fd63d0e-6a21-4817-aa5a-b508421b7477
	I0127 12:36:36.826092    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:36.826092    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:36.826092    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:36.826092    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:36.826328    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:36.826600    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:36.826600    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:36.826600    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:36.826600    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:36.829504    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:36.829796    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:36.829796    9948 round_trippers.go:580]     Audit-Id: 1f89c64e-cfe2-498c-a969-0662949d923d
	I0127 12:36:36.829796    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:36.829796    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:36.829796    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:36.829796    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:36.829796    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:36 GMT
	I0127 12:36:36.830000    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:37.320933    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:37.320933    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:37.320933    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:37.320933    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:37.326193    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:37.326263    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:37.326263    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:37 GMT
	I0127 12:36:37.326263    9948 round_trippers.go:580]     Audit-Id: 9ff4cccb-53dd-4e67-a13e-ffec69ad3ea5
	I0127 12:36:37.326263    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:37.326263    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:37.326263    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:37.326317    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:37.326347    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:37.327309    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:37.327401    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:37.327401    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:37.327401    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:37.330251    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:37.330251    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:37.330251    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:37 GMT
	I0127 12:36:37.330251    9948 round_trippers.go:580]     Audit-Id: 4dd66b4a-6caf-4906-b952-c19c0ebb7d5e
	I0127 12:36:37.330251    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:37.330251    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:37.330251    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:37.330251    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:37.330251    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:37.331669    9948 pod_ready.go:103] pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:37.820544    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:37.820544    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:37.820544    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:37.820544    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:37.825114    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:37.825114    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:37.825114    9948 round_trippers.go:580]     Audit-Id: c458a91e-2744-4255-ad1c-e8f374539e14
	I0127 12:36:37.825114    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:37.825114    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:37.825114    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:37.825114    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:37.825114    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:37 GMT
	I0127 12:36:37.825114    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:37.826218    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:37.826299    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:37.826299    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:37.826299    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:37.829705    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:37.829705    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:37.829705    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:37.829705    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:37 GMT
	I0127 12:36:37.829705    9948 round_trippers.go:580]     Audit-Id: f80ff682-422e-4c89-abd7-2dc22f8a0f47
	I0127 12:36:37.829705    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:37.829705    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:37.829705    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:37.830309    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:38.320686    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:38.320686    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:38.320686    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:38.320686    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:38.326732    9948 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:36:38.326732    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:38.326732    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:38.326732    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:38 GMT
	I0127 12:36:38.326732    9948 round_trippers.go:580]     Audit-Id: 2c09b2e5-209b-41b9-99ee-27d5973e52b5
	I0127 12:36:38.326732    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:38.326732    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:38.326732    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:38.327562    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:38.328413    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:38.328413    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:38.328413    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:38.328413    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:38.331012    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:38.331012    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:38.331012    9948 round_trippers.go:580]     Audit-Id: 90b87926-098e-4f69-a18e-46d806a32bc9
	I0127 12:36:38.331012    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:38.331012    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:38.331012    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:38.331012    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:38.331012    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:38 GMT
	I0127 12:36:38.331012    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:38.821070    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:38.821070    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:38.821070    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:38.821147    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:38.825917    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:38.826036    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:38.826066    9948 round_trippers.go:580]     Audit-Id: 299a4e77-cc72-451a-83e9-006b80ea8b41
	I0127 12:36:38.826066    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:38.826066    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:38.826066    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:38.826142    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:38.826173    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:38 GMT
	I0127 12:36:38.826317    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:38.827062    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:38.827259    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:38.827259    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:38.827259    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:38.829605    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:38.830260    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:38.830260    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:38.830260    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:38.830260    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:38.830260    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:38.830260    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:38 GMT
	I0127 12:36:38.830260    9948 round_trippers.go:580]     Audit-Id: 7e4a598b-43fc-4780-acff-1857a86a40cd
	I0127 12:36:38.830649    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:39.321081    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:39.321081    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:39.321081    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:39.321081    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:39.326041    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:39.326109    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:39.326183    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:39.326183    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:39.326183    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:39.326183    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:39 GMT
	I0127 12:36:39.326183    9948 round_trippers.go:580]     Audit-Id: 4c36b3e8-84b5-4e58-bc8d-091181e93fd6
	I0127 12:36:39.326183    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:39.326478    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:39.327061    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:39.327061    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:39.327061    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:39.327061    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:39.330640    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:39.330894    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:39.330894    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:39.330894    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:39.330894    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:39 GMT
	I0127 12:36:39.330894    9948 round_trippers.go:580]     Audit-Id: 09f4cb5d-f762-492f-8fd2-db25aa633485
	I0127 12:36:39.330894    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:39.330894    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:39.331275    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:39.331774    9948 pod_ready.go:103] pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:39.821389    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:39.821389    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:39.821389    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:39.821389    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:39.825836    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:39.825836    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:39.825836    9948 round_trippers.go:580]     Audit-Id: 96d50f92-e468-4df2-a42e-e12e9a7e7ffd
	I0127 12:36:39.825836    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:39.825836    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:39.825836    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:39.825836    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:39.825836    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:39 GMT
	I0127 12:36:39.825836    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:39.826916    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:39.826916    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:39.826999    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:39.826999    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:39.829988    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:39.829988    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:39.829988    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:39.829988    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:39.829988    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:39.830106    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:39 GMT
	I0127 12:36:39.830106    9948 round_trippers.go:580]     Audit-Id: 94ec9d44-3c5f-425f-ae27-f6e93ff23189
	I0127 12:36:39.830106    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:39.830397    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:40.321366    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:40.321366    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:40.321366    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:40.321366    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:40.326284    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:40.326284    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:40.326284    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:40 GMT
	I0127 12:36:40.326284    9948 round_trippers.go:580]     Audit-Id: 6619e06d-2270-494b-a3bf-75378b31fa38
	I0127 12:36:40.326284    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:40.326284    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:40.326284    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:40.326284    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:40.326907    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:40.327931    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:40.327931    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:40.328024    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:40.328024    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:40.330814    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:40.331793    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:40.331793    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:40.331793    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:40.331793    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:40.331793    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:40.331793    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:40 GMT
	I0127 12:36:40.331793    9948 round_trippers.go:580]     Audit-Id: 0ec7db17-f855-4e00-815c-08829cd9975f
	I0127 12:36:40.332042    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:40.820861    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:40.820861    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:40.820861    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:40.820861    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:40.824107    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:40.824107    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:40.824107    9948 round_trippers.go:580]     Audit-Id: 5d1bf74e-6e66-40f9-9b35-7673a4dea054
	I0127 12:36:40.824107    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:40.824107    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:40.824107    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:40.824107    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:40.824107    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:40 GMT
	I0127 12:36:40.824107    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:40.825263    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:40.825263    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:40.825263    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:40.825263    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:40.827900    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:40.827986    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:40.827986    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:40.827986    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:40.827986    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:40.827986    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:40 GMT
	I0127 12:36:40.827986    9948 round_trippers.go:580]     Audit-Id: d1e6c02c-8c4b-437d-9853-871133e118cc
	I0127 12:36:40.828069    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:40.828556    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:41.320979    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:41.320979    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:41.320979    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:41.320979    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:41.324611    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:41.324611    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:41.324611    9948 round_trippers.go:580]     Audit-Id: c3a64e1c-1ba6-4071-906e-0f94efdc34c9
	I0127 12:36:41.324611    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:41.324611    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:41.324611    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:41.324611    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:41.324611    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:41 GMT
	I0127 12:36:41.325328    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:41.326018    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:41.326075    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:41.326075    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:41.326075    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:41.328384    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:41.328384    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:41.328452    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:41.328452    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:41.328452    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:41.328452    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:41 GMT
	I0127 12:36:41.328452    9948 round_trippers.go:580]     Audit-Id: eb33792e-b287-47e5-85df-360fd77dbb66
	I0127 12:36:41.328452    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:41.328795    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:41.821678    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:41.821678    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:41.821678    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:41.821678    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:41.825439    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:41.825531    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:41.825531    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:41.825531    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:41.825609    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:41.825609    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:41 GMT
	I0127 12:36:41.825609    9948 round_trippers.go:580]     Audit-Id: 35bc2cd2-c8a6-4622-aee6-25efa69650d4
	I0127 12:36:41.825609    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:41.825763    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:41.826356    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:41.826356    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:41.826356    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:41.826356    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:41.828944    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:41.828944    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:41.828944    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:41 GMT
	I0127 12:36:41.828944    9948 round_trippers.go:580]     Audit-Id: 8942abae-06d9-445c-9ad6-6bc988527c6c
	I0127 12:36:41.828944    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:41.828944    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:41.828944    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:41.828944    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:41.831215    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:41.831215    9948 pod_ready.go:103] pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:42.321254    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:42.321254    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:42.321254    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:42.321254    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:42.326755    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:42.326812    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:42.326812    9948 round_trippers.go:580]     Audit-Id: 5c4fedb5-53d9-4fe2-8d0b-8480313db713
	I0127 12:36:42.326812    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:42.326812    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:42.326863    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:42.326863    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:42.326863    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:42 GMT
	I0127 12:36:42.327080    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:42.328025    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:42.328054    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:42.328054    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:42.328110    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:42.329940    9948 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0127 12:36:42.331030    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:42.331058    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:42.331058    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:42.331058    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:42 GMT
	I0127 12:36:42.331058    9948 round_trippers.go:580]     Audit-Id: a985d4d4-cc11-41d8-9b1e-8d8326154bf2
	I0127 12:36:42.331058    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:42.331058    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:42.331459    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:42.821058    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:42.821554    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:42.821554    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:42.821554    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:42.825087    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:42.825087    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:42.825087    9948 round_trippers.go:580]     Audit-Id: c868490d-2c4f-4a10-b693-f51d31e7322b
	I0127 12:36:42.825087    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:42.826094    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:42.826094    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:42.826094    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:42.826094    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:42 GMT
	I0127 12:36:42.826320    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:42.827999    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:42.827999    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:42.827999    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:42.827999    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:42.837542    9948 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0127 12:36:42.838395    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:42.838395    9948 round_trippers.go:580]     Audit-Id: 87a72fea-1b12-4eb8-a62f-0400451dab7d
	I0127 12:36:42.838395    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:42.838395    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:42.838395    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:42.838395    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:42.838395    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:42 GMT
	I0127 12:36:42.838800    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:43.320919    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:43.320992    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:43.321063    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:43.321063    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:43.324954    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:43.325088    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:43.325088    9948 round_trippers.go:580]     Audit-Id: 54edfd64-ac5d-47fe-9191-dc131ebcc440
	I0127 12:36:43.325088    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:43.325088    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:43.325088    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:43.325088    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:43.325137    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:43 GMT
	I0127 12:36:43.325241    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:43.326032    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:43.326085    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:43.326085    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:43.326085    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:43.329040    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:43.329040    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:43.329040    9948 round_trippers.go:580]     Audit-Id: b9a89f32-83aa-4d39-87d0-70654bfe1e2e
	I0127 12:36:43.329040    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:43.329040    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:43.329040    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:43.329040    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:43.329040    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:43 GMT
	I0127 12:36:43.329594    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:43.820553    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:43.820553    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:43.820553    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:43.820553    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:43.825900    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:43.825900    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:43.825900    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:43.826088    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:43.826088    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:43.826088    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:43 GMT
	I0127 12:36:43.826088    9948 round_trippers.go:580]     Audit-Id: c26c8ce4-3843-44e5-9264-5e1e574bbd8f
	I0127 12:36:43.826088    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:43.826280    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:43.827541    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:43.827541    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:43.827541    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:43.827541    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:43.831828    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:43.831828    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:43.831828    9948 round_trippers.go:580]     Audit-Id: 91dfcc2a-cbdf-4951-801f-a5427f673887
	I0127 12:36:43.831828    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:43.831828    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:43.831828    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:43.831828    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:43.831828    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:43 GMT
	I0127 12:36:43.831828    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:43.832707    9948 pod_ready.go:103] pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:44.320690    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:44.320690    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:44.320690    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:44.320690    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:44.323966    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:44.323966    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:44.323966    9948 round_trippers.go:580]     Audit-Id: 6cc2ce16-ca4e-4a07-95da-141006da92b2
	I0127 12:36:44.324069    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:44.324069    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:44.324069    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:44.324069    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:44.324069    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:44 GMT
	I0127 12:36:44.324166    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:44.324885    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:44.324885    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:44.324885    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:44.324885    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:44.328506    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:44.328506    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:44.328595    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:44.328595    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:44 GMT
	I0127 12:36:44.328595    9948 round_trippers.go:580]     Audit-Id: d4ba006e-97f6-427c-91d1-4425543d2724
	I0127 12:36:44.328595    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:44.328595    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:44.328595    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:44.328868    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:44.820894    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:44.821461    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:44.821461    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:44.821461    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:44.825409    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:44.825409    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:44.825409    9948 round_trippers.go:580]     Audit-Id: 12cb0afd-4713-40d7-ba85-d35466c0e5c5
	I0127 12:36:44.825409    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:44.825409    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:44.825409    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:44.825409    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:44.825409    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:44 GMT
	I0127 12:36:44.825649    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:44.826364    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:44.826467    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:44.826467    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:44.826467    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:44.828367    9948 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0127 12:36:44.829246    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:44.829246    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:44.829246    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:44.829246    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:44 GMT
	I0127 12:36:44.829246    9948 round_trippers.go:580]     Audit-Id: 8881fe61-2c50-4442-bdff-ee08c1492cfd
	I0127 12:36:44.829246    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:44.829246    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:44.829562    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:45.321583    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:45.321583    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:45.321583    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:45.321583    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:45.325435    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:45.325497    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:45.325497    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:45.325497    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:45.325497    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:45 GMT
	I0127 12:36:45.325497    9948 round_trippers.go:580]     Audit-Id: 15bd9c8b-72ad-4d79-82a2-43838af25a23
	I0127 12:36:45.325497    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:45.325497    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:45.325717    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:45.326691    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:45.326766    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:45.326766    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:45.326766    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:45.329998    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:45.330034    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:45.330089    9948 round_trippers.go:580]     Audit-Id: 50ad3c45-2ddd-4675-b901-57772ffb59c7
	I0127 12:36:45.330089    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:45.330089    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:45.330089    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:45.330089    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:45.330089    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:45 GMT
	I0127 12:36:45.330250    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:45.820465    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:45.820465    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:45.820465    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:45.820465    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:45.825690    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:45.825805    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:45.825805    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:45.825805    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:45.825805    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:45 GMT
	I0127 12:36:45.825805    9948 round_trippers.go:580]     Audit-Id: c74d55b4-0078-4574-bf09-8b00be6fac2b
	I0127 12:36:45.825805    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:45.825805    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:45.826175    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:45.826982    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:45.826982    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:45.826982    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:45.826982    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:45.833217    9948 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:36:45.833298    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:45.833298    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:45.833298    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:45 GMT
	I0127 12:36:45.833298    9948 round_trippers.go:580]     Audit-Id: d5d93877-e067-4d8c-8e62-cd8b20d3e3bf
	I0127 12:36:45.833298    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:45.833298    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:45.833298    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:45.834363    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:45.835468    9948 pod_ready.go:103] pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:46.321354    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:46.321354    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:46.321354    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:46.321354    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:46.324390    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:46.324390    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:46.324390    9948 round_trippers.go:580]     Audit-Id: d8ce2e3b-cfe5-4d49-a4eb-1a03a2828629
	I0127 12:36:46.324390    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:46.324390    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:46.324390    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:46.324390    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:46.324390    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:46 GMT
	I0127 12:36:46.324390    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:46.324390    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:46.324390    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:46.324390    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:46.324390    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:46.331225    9948 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:36:46.331225    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:46.331225    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:46.331225    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:46.331225    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:46 GMT
	I0127 12:36:46.331225    9948 round_trippers.go:580]     Audit-Id: 411188c6-0279-40df-978f-7d9770829b9f
	I0127 12:36:46.331225    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:46.331225    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:46.331651    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:46.821298    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:46.821298    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:46.821298    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:46.821298    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:46.825935    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:46.826017    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:46.826017    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:46.826017    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:46.826017    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:46 GMT
	I0127 12:36:46.826017    9948 round_trippers.go:580]     Audit-Id: 04df8bcb-1dcc-46db-a73f-48b4c3d191d6
	I0127 12:36:46.826124    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:46.826124    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:46.826354    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:46.826530    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:46.826530    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:46.827101    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:46.827101    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:46.836171    9948 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0127 12:36:46.836171    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:46.836171    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:46.836171    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:46 GMT
	I0127 12:36:46.836171    9948 round_trippers.go:580]     Audit-Id: 57c979ff-a6b0-44f7-8b81-9d083bd9c742
	I0127 12:36:46.836171    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:46.836171    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:46.836171    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:46.836171    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:47.321368    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:47.321368    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:47.321368    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:47.321368    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:47.326355    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:47.326422    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:47.326422    9948 round_trippers.go:580]     Audit-Id: adb60445-50b1-4c22-b003-d57848839eaf
	I0127 12:36:47.326422    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:47.326422    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:47.326422    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:47.326422    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:47.326499    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:47 GMT
	I0127 12:36:47.326808    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:47.327737    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:47.327737    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:47.327737    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:47.327737    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:47.331334    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:47.331334    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:47.331334    9948 round_trippers.go:580]     Audit-Id: f91dfa9b-ddec-459b-bf52-f4584a60279d
	I0127 12:36:47.331419    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:47.331419    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:47.331419    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:47.331419    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:47.331419    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:47 GMT
	I0127 12:36:47.331615    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:47.821043    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:47.821043    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:47.821043    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:47.821043    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:47.828936    9948 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 12:36:47.828936    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:47.828936    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:47.828936    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:47 GMT
	I0127 12:36:47.828936    9948 round_trippers.go:580]     Audit-Id: d0f8558a-8914-401e-a890-28f3f3846e20
	I0127 12:36:47.828936    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:47.828936    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:47.828936    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:47.828936    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"2021","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7275 chars]
	I0127 12:36:47.830137    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:47.830208    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:47.830208    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:47.830208    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:47.830469    9948 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0127 12:36:47.830469    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:47.830469    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:47.830469    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:47.830469    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:47 GMT
	I0127 12:36:47.830469    9948 round_trippers.go:580]     Audit-Id: 2c93b556-f219-4bc3-bcb2-534a9256833e
	I0127 12:36:47.830469    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:47.830469    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:47.835693    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:47.835693    9948 pod_ready.go:103] pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:48.320957    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:48.320957    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:48.320957    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:48.320957    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:48.325256    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:48.325337    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:48.325337    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:48.325337    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:48 GMT
	I0127 12:36:48.325337    9948 round_trippers.go:580]     Audit-Id: 59cd427f-2dda-4af3-b85d-8a9951703b09
	I0127 12:36:48.325337    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:48.325337    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:48.325337    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:48.326379    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"2021","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7275 chars]
	I0127 12:36:48.327250    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:48.327289    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:48.327351    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:48.327351    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:48.330734    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:48.330791    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:48.330791    9948 round_trippers.go:580]     Audit-Id: addd9b7d-8852-42f4-bd35-b5d52ca2b2ec
	I0127 12:36:48.330791    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:48.330791    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:48.330791    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:48.330791    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:48.330791    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:48 GMT
	I0127 12:36:48.330791    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:48.820925    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:48.820925    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:48.820925    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:48.820925    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:48.826876    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:48.826876    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:48.826876    9948 round_trippers.go:580]     Audit-Id: 5fa8e9ff-a6a3-429c-8e2a-72e15c9f7add
	I0127 12:36:48.826876    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:48.826876    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:48.826876    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:48.826876    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:48.826876    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:48 GMT
	I0127 12:36:48.827727    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"2024","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7046 chars]
	I0127 12:36:48.828532    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:48.828705    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:48.828705    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:48.828705    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:48.835065    9948 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:36:48.835065    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:48.835065    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:48.835065    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:48 GMT
	I0127 12:36:48.835622    9948 round_trippers.go:580]     Audit-Id: 6acc7d47-78ce-490c-978c-9a4f4e210905
	I0127 12:36:48.835622    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:48.835622    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:48.835622    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:48.835686    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:48.836289    9948 pod_ready.go:93] pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:48.836289    9948 pod_ready.go:82] duration metric: took 15.5159311s for pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:48.836289    9948 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:48.836289    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-659000
	I0127 12:36:48.836289    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:48.836289    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:48.836289    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:48.839500    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:48.839500    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:48.839588    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:48.839588    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:48 GMT
	I0127 12:36:48.839588    9948 round_trippers.go:580]     Audit-Id: e21fa624-ca15-457a-87ec-77af1716c28f
	I0127 12:36:48.839588    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:48.839588    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:48.839588    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:48.840031    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-659000","namespace":"kube-system","uid":"4c33fa42-51a7-4a7a-a497-cce80b8773d6","resourceVersion":"1939","creationTimestamp":"2025-01-27T12:35:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.29.198.106:2379","kubernetes.io/config.hash":"575cefa3aa8017dce576fa244e719a4e","kubernetes.io/config.mirror":"575cefa3aa8017dce576fa244e719a4e","kubernetes.io/config.seen":"2025-01-27T12:35:36.285837685Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:35:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6617 chars]
	I0127 12:36:48.840454    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:48.840454    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:48.840454    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:48.840454    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:48.842619    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:48.843248    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:48.843248    9948 round_trippers.go:580]     Audit-Id: 47a86454-86a5-4234-8b43-573632b52286
	I0127 12:36:48.843248    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:48.843248    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:48.843248    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:48.843318    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:48.843318    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:48 GMT
	I0127 12:36:48.843557    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:48.844012    9948 pod_ready.go:93] pod "etcd-multinode-659000" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:48.844012    9948 pod_ready.go:82] duration metric: took 7.7226ms for pod "etcd-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:48.844088    9948 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:48.844196    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-659000
	I0127 12:36:48.844196    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:48.844196    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:48.844196    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:48.846871    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:48.846871    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:48.846871    9948 round_trippers.go:580]     Audit-Id: 26221292-c660-4a10-ab6a-632192a23b5a
	I0127 12:36:48.846871    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:48.846871    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:48.846871    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:48.846871    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:48.846871    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:48 GMT
	I0127 12:36:48.846871    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-659000","namespace":"kube-system","uid":"8fbee94f-fd8f-4431-bd9f-b75d49cb19d4","resourceVersion":"1937","creationTimestamp":"2025-01-27T12:35:42Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.29.198.106:8443","kubernetes.io/config.hash":"b9fbd177058ba298cde2a92c4ef5c601","kubernetes.io/config.mirror":"b9fbd177058ba298cde2a92c4ef5c601","kubernetes.io/config.seen":"2025-01-27T12:35:36.265565317Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:35:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8049 chars]
	I0127 12:36:48.847970    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:48.847970    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:48.847970    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:48.848039    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:48.850747    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:48.851008    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:48.851008    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:48 GMT
	I0127 12:36:48.851008    9948 round_trippers.go:580]     Audit-Id: f08e9b74-0454-4e56-b61c-b25ad72ecf29
	I0127 12:36:48.851008    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:48.851008    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:48.851008    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:48.851091    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:48.851793    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:48.852395    9948 pod_ready.go:93] pod "kube-apiserver-multinode-659000" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:48.852395    9948 pod_ready.go:82] duration metric: took 8.3073ms for pod "kube-apiserver-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:48.852486    9948 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:48.852559    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-659000
	I0127 12:36:48.852632    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:48.852632    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:48.852687    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:48.854785    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:48.855165    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:48.855165    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:48.855165    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:48.855165    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:48 GMT
	I0127 12:36:48.855165    9948 round_trippers.go:580]     Audit-Id: 47f73fc3-4509-4187-b156-bbc3ae52477b
	I0127 12:36:48.855165    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:48.855165    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:48.855438    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-659000","namespace":"kube-system","uid":"8be02f36-161c-44f3-b526-56db3b8a007a","resourceVersion":"1923","creationTimestamp":"2025-01-27T12:11:59Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4a14d0700eafa36dd3913955f2c0f839","kubernetes.io/config.mirror":"4a14d0700eafa36dd3913955f2c0f839","kubernetes.io/config.seen":"2025-01-27T12:11:59.106472767Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:11:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0127 12:36:48.855926    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:48.856366    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:48.856366    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:48.856366    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:48.859227    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:48.859227    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:48.859227    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:48.859227    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:48.859227    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:48.859864    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:48 GMT
	I0127 12:36:48.859864    9948 round_trippers.go:580]     Audit-Id: c3c29682-020b-4e2e-8559-65e52d1018d6
	I0127 12:36:48.859864    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:48.859898    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:48.860452    9948 pod_ready.go:93] pod "kube-controller-manager-multinode-659000" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:48.860523    9948 pod_ready.go:82] duration metric: took 8.0371ms for pod "kube-controller-manager-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:48.860523    9948 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pjhc8" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:48.860623    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pjhc8
	I0127 12:36:48.860702    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:48.860702    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:48.860702    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:48.864925    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:48.864925    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:48.864925    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:48.864925    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:48.864925    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:48.864925    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:48.864925    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:48 GMT
	I0127 12:36:48.864925    9948 round_trippers.go:580]     Audit-Id: 4bbd0dc8-ffb8-4d23-b1b0-4f7186552d1f
	I0127 12:36:48.864925    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pjhc8","generateName":"kube-proxy-","namespace":"kube-system","uid":"ddb6698c-b83d-4a49-9672-c894e87cbb66","resourceVersion":"1998","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d88eb776-b464-4f2b-8140-44249610a7fa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d88eb776-b464-4f2b-8140-44249610a7fa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6433 chars]
	I0127 12:36:48.864925    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:36:48.864925    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:48.864925    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:48.864925    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:48.868510    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:48.868510    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:48.868510    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:48.868510    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:48.868510    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:48.868510    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:48 GMT
	I0127 12:36:48.868510    9948 round_trippers.go:580]     Audit-Id: 51226419-bf8e-4030-9631-bc750d16862c
	I0127 12:36:48.868510    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:48.868510    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"2006","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4584 chars]
	I0127 12:36:48.869060    9948 pod_ready.go:98] node "multinode-659000-m02" hosting pod "kube-proxy-pjhc8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000-m02" has status "Ready":"Unknown"
	I0127 12:36:48.869060    9948 pod_ready.go:82] duration metric: took 8.537ms for pod "kube-proxy-pjhc8" in "kube-system" namespace to be "Ready" ...
	E0127 12:36:48.869060    9948 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-659000-m02" hosting pod "kube-proxy-pjhc8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000-m02" has status "Ready":"Unknown"
	I0127 12:36:48.869060    9948 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s46mv" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:49.021437    9948 request.go:632] Waited for 152.3757ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s46mv
	I0127 12:36:49.021738    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s46mv
	I0127 12:36:49.021788    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:49.021788    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:49.021788    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:49.025093    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:49.025093    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:49.025093    9948 round_trippers.go:580]     Audit-Id: 48044657-793a-45cb-b316-6a60c1c86261
	I0127 12:36:49.025093    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:49.025177    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:49.025177    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:49.025177    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:49.025177    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:49 GMT
	I0127 12:36:49.025529    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s46mv","generateName":"kube-proxy-","namespace":"kube-system","uid":"ae3b8daf-d674-4cfe-8652-cb5ff6ba8615","resourceVersion":"1898","creationTimestamp":"2025-01-27T12:12:03Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d88eb776-b464-4f2b-8140-44249610a7fa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d88eb776-b464-4f2b-8140-44249610a7fa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6405 chars]
	I0127 12:36:49.222034    9948 request.go:632] Waited for 196.1432ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:49.222441    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:49.222441    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:49.222559    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:49.222559    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:49.226229    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:49.226326    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:49.226326    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:49.226326    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:49.226326    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:49 GMT
	I0127 12:36:49.226326    9948 round_trippers.go:580]     Audit-Id: e759def6-4238-4b1c-9744-c9caa6aea460
	I0127 12:36:49.226326    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:49.226326    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:49.226885    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:49.227018    9948 pod_ready.go:93] pod "kube-proxy-s46mv" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:49.227018    9948 pod_ready.go:82] duration metric: took 357.9544ms for pod "kube-proxy-s46mv" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:49.227018    9948 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sk5js" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:49.421122    9948 request.go:632] Waited for 193.5557ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sk5js
	I0127 12:36:49.421609    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sk5js
	I0127 12:36:49.421609    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:49.421609    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:49.421609    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:49.426007    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:49.426090    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:49.426090    9948 round_trippers.go:580]     Audit-Id: a9f266d8-14da-4408-96ab-db5223079ceb
	I0127 12:36:49.426213    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:49.426213    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:49.426213    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:49.426236    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:49.426236    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:49 GMT
	I0127 12:36:49.426618    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sk5js","generateName":"kube-proxy-","namespace":"kube-system","uid":"ba679e1d-713c-4bd4-b267-2b887c1ac4df","resourceVersion":"1793","creationTimestamp":"2025-01-27T12:19:54Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d88eb776-b464-4f2b-8140-44249610a7fa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:19:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d88eb776-b464-4f2b-8140-44249610a7fa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6428 chars]
	I0127 12:36:49.621641    9948 request.go:632] Waited for 194.518ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/nodes/multinode-659000-m03
	I0127 12:36:49.621641    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000-m03
	I0127 12:36:49.621641    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:49.621641    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:49.621641    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:49.626140    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:49.626140    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:49.626140    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:49.626140    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:49 GMT
	I0127 12:36:49.626140    9948 round_trippers.go:580]     Audit-Id: d3690c0b-4873-477a-9ad1-7656393a8fd0
	I0127 12:36:49.626140    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:49.626140    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:49.626140    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:49.626140    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m03","uid":"0516f5fa-16ad-40aa-9616-01d098e46466","resourceVersion":"1941","creationTimestamp":"2025-01-27T12:31:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_31_04_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:31:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4400 chars]
	I0127 12:36:49.626894    9948 pod_ready.go:98] node "multinode-659000-m03" hosting pod "kube-proxy-sk5js" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000-m03" has status "Ready":"Unknown"
	I0127 12:36:49.626962    9948 pod_ready.go:82] duration metric: took 399.8713ms for pod "kube-proxy-sk5js" in "kube-system" namespace to be "Ready" ...
	E0127 12:36:49.626962    9948 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-659000-m03" hosting pod "kube-proxy-sk5js" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000-m03" has status "Ready":"Unknown"
	I0127 12:36:49.626962    9948 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:49.821366    9948 request.go:632] Waited for 194.3415ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-659000
	I0127 12:36:49.821750    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-659000
	I0127 12:36:49.821750    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:49.821750    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:49.821750    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:49.826259    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:49.826332    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:49.826332    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:49 GMT
	I0127 12:36:49.826395    9948 round_trippers.go:580]     Audit-Id: 6c855360-0c03-4c39-a29d-4242802315c2
	I0127 12:36:49.826395    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:49.826395    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:49.826395    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:49.826395    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:49.826725    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-659000","namespace":"kube-system","uid":"52b91964-a331-4925-9e07-c8df32b4176d","resourceVersion":"1925","creationTimestamp":"2025-01-27T12:11:58Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e6c90fc43fa6c0754218ff1c4162045d","kubernetes.io/config.mirror":"e6c90fc43fa6c0754218ff1c4162045d","kubernetes.io/config.seen":"2025-01-27T12:11:51.419790825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:11:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5568 chars]
	I0127 12:36:50.021561    9948 request.go:632] Waited for 194.5366ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:50.021561    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:50.021561    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:50.021561    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:50.021561    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:50.029232    9948 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 12:36:50.029299    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:50.029299    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:50.029299    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:50 GMT
	I0127 12:36:50.029299    9948 round_trippers.go:580]     Audit-Id: ba8f26d1-188c-464b-b234-9de842093182
	I0127 12:36:50.029299    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:50.029299    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:50.029299    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:50.029299    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:50.030220    9948 pod_ready.go:93] pod "kube-scheduler-multinode-659000" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:50.030220    9948 pod_ready.go:82] duration metric: took 403.2533ms for pod "kube-scheduler-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:50.030220    9948 pod_ready.go:39] duration metric: took 16.7212511s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:36:50.030220    9948 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:36:50.042588    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 12:36:50.073589    9948 command_runner.go:130] > ea993630a310
	I0127 12:36:50.073699    9948 logs.go:282] 1 containers: [ea993630a310]
	I0127 12:36:50.083119    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 12:36:50.108205    9948 command_runner.go:130] > 0ef2a3b50bae
	I0127 12:36:50.108275    9948 logs.go:282] 1 containers: [0ef2a3b50bae]
	I0127 12:36:50.121182    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 12:36:50.150046    9948 command_runner.go:130] > b3a9ed6e130c
	I0127 12:36:50.150046    9948 command_runner.go:130] > f818dd15d8b0
	I0127 12:36:50.150046    9948 logs.go:282] 2 containers: [b3a9ed6e130c f818dd15d8b0]
	I0127 12:36:50.159402    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 12:36:50.182881    9948 command_runner.go:130] > ed51c7eaa966
	I0127 12:36:50.182881    9948 command_runner.go:130] > a16e06a03860
	I0127 12:36:50.184878    9948 logs.go:282] 2 containers: [ed51c7eaa966 a16e06a03860]
	I0127 12:36:50.194142    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 12:36:50.215471    9948 command_runner.go:130] > 0283b35dee3c
	I0127 12:36:50.215471    9948 command_runner.go:130] > bbec7ccef7da
	I0127 12:36:50.218082    9948 logs.go:282] 2 containers: [0283b35dee3c bbec7ccef7da]
	I0127 12:36:50.227809    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 12:36:50.253979    9948 command_runner.go:130] > 8d4872cda28d
	I0127 12:36:50.253979    9948 command_runner.go:130] > e07a66f8f619
	I0127 12:36:50.253979    9948 logs.go:282] 2 containers: [8d4872cda28d e07a66f8f619]
	I0127 12:36:50.263626    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0127 12:36:50.286152    9948 command_runner.go:130] > 373bec67270f
	I0127 12:36:50.286152    9948 command_runner.go:130] > d758000dda95
	I0127 12:36:50.287448    9948 logs.go:282] 2 containers: [373bec67270f d758000dda95]
	I0127 12:36:50.287542    9948 logs.go:123] Gathering logs for Docker ...
	I0127 12:36:50.287542    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0127 12:36:50.318623    9948 command_runner.go:130] > Jan 27 12:34:11 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0127 12:36:50.318623    9948 command_runner.go:130] > Jan 27 12:34:11 minikube cri-dockerd[223]: time="2025-01-27T12:34:11Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0127 12:36:50.318697    9948 command_runner.go:130] > Jan 27 12:34:11 minikube cri-dockerd[223]: time="2025-01-27T12:34:11Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0127 12:36:50.318697    9948 command_runner.go:130] > Jan 27 12:34:11 minikube cri-dockerd[223]: time="2025-01-27T12:34:11Z" level=info msg="Start docker client with request timeout 0s"
	I0127 12:36:50.318697    9948 command_runner.go:130] > Jan 27 12:34:11 minikube cri-dockerd[223]: time="2025-01-27T12:34:11Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0127 12:36:50.318764    9948 command_runner.go:130] > Jan 27 12:34:11 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:50.318764    9948 command_runner.go:130] > Jan 27 12:34:11 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0127 12:36:50.318764    9948 command_runner.go:130] > Jan 27 12:34:11 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0127 12:36:50.318824    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0127 12:36:50.318824    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0127 12:36:50.318824    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0127 12:36:50.318824    9948 command_runner.go:130] > Jan 27 12:34:14 minikube cri-dockerd[404]: time="2025-01-27T12:34:14Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0127 12:36:50.318824    9948 command_runner.go:130] > Jan 27 12:34:14 minikube cri-dockerd[404]: time="2025-01-27T12:34:14Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0127 12:36:50.318882    9948 command_runner.go:130] > Jan 27 12:34:14 minikube cri-dockerd[404]: time="2025-01-27T12:34:14Z" level=info msg="Start docker client with request timeout 0s"
	I0127 12:36:50.318953    9948 command_runner.go:130] > Jan 27 12:34:14 minikube cri-dockerd[404]: time="2025-01-27T12:34:14Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0127 12:36:50.318953    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:50.318953    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0127 12:36:50.319037    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0127 12:36:50.319037    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0127 12:36:50.319037    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0127 12:36:50.319064    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0127 12:36:50.319064    9948 command_runner.go:130] > Jan 27 12:34:16 minikube cri-dockerd[425]: time="2025-01-27T12:34:16Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0127 12:36:50.319064    9948 command_runner.go:130] > Jan 27 12:34:16 minikube cri-dockerd[425]: time="2025-01-27T12:34:16Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0127 12:36:50.319115    9948 command_runner.go:130] > Jan 27 12:34:16 minikube cri-dockerd[425]: time="2025-01-27T12:34:16Z" level=info msg="Start docker client with request timeout 0s"
	I0127 12:36:50.319115    9948 command_runner.go:130] > Jan 27 12:34:16 minikube cri-dockerd[425]: time="2025-01-27T12:34:16Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0127 12:36:50.319173    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:50.319173    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0127 12:36:50.319173    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0127 12:36:50.319173    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0127 12:36:50.319173    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0127 12:36:50.319244    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0127 12:36:50.319244    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0127 12:36:50.319288    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0127 12:36:50.319288    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 systemd[1]: Starting Docker Application Container Engine...
	I0127 12:36:50.319382    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[653]: time="2025-01-27T12:35:01.316616305Z" level=info msg="Starting up"
	I0127 12:36:50.319382    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[653]: time="2025-01-27T12:35:01.317424338Z" level=info msg="containerd not running, starting managed containerd"
	I0127 12:36:50.319417    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[653]: time="2025-01-27T12:35:01.318870498Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=659
	I0127 12:36:50.319454    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.350184287Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0127 12:36:50.319454    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374094572Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0127 12:36:50.319501    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374181575Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0127 12:36:50.319501    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374315681Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0127 12:36:50.319557    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374337282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.319557    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374861203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:50.319557    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374889804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.319557    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375040811Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:50.319642    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375239819Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.319667    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375267320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:50.319709    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375281220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.319709    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375833643Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.319709    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.376559373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.319760    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.379449292Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:50.319760    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.379538296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.319876    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.379661901Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:50.319981    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.379800807Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0127 12:36:50.319981    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.380313228Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0127 12:36:50.319981    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.380441533Z" level=info msg="metadata content store policy set" policy=shared
	I0127 12:36:50.320100    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.385960360Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0127 12:36:50.320100    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386099266Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0127 12:36:50.320100    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386121867Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0127 12:36:50.320100    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386137768Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0127 12:36:50.320184    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386151968Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0127 12:36:50.320184    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386229971Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0127 12:36:50.320184    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386475981Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0127 12:36:50.320184    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386600687Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0127 12:36:50.320269    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386685890Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0127 12:36:50.320269    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386757893Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0127 12:36:50.320365    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386815695Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.320365    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386833196Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.320365    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386854497Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.320427    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386882698Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.320427    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386897399Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.320427    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386908999Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.320427    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386920500Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.320427    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386931000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.320512    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386948401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320512    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386962701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320538    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387079606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320578    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387099107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320578    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387131708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320578    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387149509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320660    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387164010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320660    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387179110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320660    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387194311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320660    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387212812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320660    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387227412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320743    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387242613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320769    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387257314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320769    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387275514Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0127 12:36:50.320808    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387300315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320808    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387352418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320808    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387385019Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0127 12:36:50.320859    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387423920Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0127 12:36:50.320859    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387443921Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0127 12:36:50.320914    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387454422Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0127 12:36:50.320914    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387465222Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0127 12:36:50.320967    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387473923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320967    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387486423Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0127 12:36:50.321041    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387496523Z" level=info msg="NRI interface is disabled by configuration."
	I0127 12:36:50.321041    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.388077647Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0127 12:36:50.321041    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.388176351Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0127 12:36:50.321093    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.388221553Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0127 12:36:50.321093    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.388239554Z" level=info msg="containerd successfully booted in 0.040630s"
	I0127 12:36:50.321093    9948 command_runner.go:130] > Jan 27 12:35:02 multinode-659000 dockerd[653]: time="2025-01-27T12:35:02.375461301Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0127 12:36:50.321093    9948 command_runner.go:130] > Jan 27 12:35:02 multinode-659000 dockerd[653]: time="2025-01-27T12:35:02.619440119Z" level=info msg="Loading containers: start."
	I0127 12:36:50.321152    9948 command_runner.go:130] > Jan 27 12:35:02 multinode-659000 dockerd[653]: time="2025-01-27T12:35:02.931712674Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0127 12:36:50.321206    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.079754338Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0127 12:36:50.321206    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.199112944Z" level=info msg="Loading containers: done."
	I0127 12:36:50.321206    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.227370410Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0127 12:36:50.321206    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.227394111Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0127 12:36:50.321264    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.227415612Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0127 12:36:50.321264    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.227924231Z" level=info msg="Daemon has completed initialization"
	I0127 12:36:50.321264    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.267619030Z" level=info msg="API listen on /var/run/docker.sock"
	I0127 12:36:50.321317    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.267851638Z" level=info msg="API listen on [::]:2376"
	I0127 12:36:50.321317    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 systemd[1]: Started Docker Application Container Engine.
	I0127 12:36:50.321317    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.208684124Z" level=info msg="Processing signal 'terminated'"
	I0127 12:36:50.321317    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.210887831Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0127 12:36:50.321317    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.211188432Z" level=info msg="Daemon shutdown complete"
	I0127 12:36:50.321399    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.211249132Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0127 12:36:50.321424    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.211349733Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0127 12:36:50.321424    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 systemd[1]: Stopping Docker Application Container Engine...
	I0127 12:36:50.321424    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 systemd[1]: docker.service: Deactivated successfully.
	I0127 12:36:50.321464    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 systemd[1]: Stopped Docker Application Container Engine.
	I0127 12:36:50.321464    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 systemd[1]: Starting Docker Application Container Engine...
	I0127 12:36:50.321464    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:29.270852796Z" level=info msg="Starting up"
	I0127 12:36:50.321514    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:29.271817099Z" level=info msg="containerd not running, starting managed containerd"
	I0127 12:36:50.321514    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:29.272921603Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1109
	I0127 12:36:50.321514    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.304741210Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0127 12:36:50.321590    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329258592Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0127 12:36:50.321590    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329353092Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0127 12:36:50.321590    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329390892Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0127 12:36:50.321651    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329406192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.321651    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329428593Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:50.321651    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329441293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.321726    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329563193Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:50.321726    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329667793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.321780    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329687993Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:50.321780    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329698693Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.321780    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329723194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.321780    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329854194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.321886    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.332844104Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:50.321886    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.332945004Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.321950    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333117005Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:50.321950    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333187905Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0127 12:36:50.321950    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333222205Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0127 12:36:50.322003    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333244905Z" level=info msg="metadata content store policy set" policy=shared
	I0127 12:36:50.322003    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333669407Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0127 12:36:50.322003    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333741907Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0127 12:36:50.322060    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333760007Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0127 12:36:50.322060    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333804107Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0127 12:36:50.322060    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333825507Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0127 12:36:50.322113    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333876808Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0127 12:36:50.322113    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334348509Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0127 12:36:50.322113    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334487410Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0127 12:36:50.322170    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334670410Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0127 12:36:50.322170    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334694510Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0127 12:36:50.322223    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334722510Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.322223    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334740210Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.322223    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334754110Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.322223    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334768211Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.322288    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334783611Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.322288    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334797111Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.322339    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334827611Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.322339    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334839711Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.322339    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334900511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322339    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334918411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322339    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334939711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322421    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334956111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322443    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334972911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322443    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335000311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322483    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335303412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322483    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335328412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322483    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335345712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322483    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335365113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322538    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335379713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322538    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335394013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322538    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335408713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322593    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335432513Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0127 12:36:50.322593    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335458213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322593    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335473813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322649    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335509613Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0127 12:36:50.322649    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335706914Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0127 12:36:50.322649    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335751914Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0127 12:36:50.322724    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335766514Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0127 12:36:50.322822    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335779214Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0127 12:36:50.322877    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335790814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322877    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335808914Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0127 12:36:50.322877    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335823714Z" level=info msg="NRI interface is disabled by configuration."
	I0127 12:36:50.322951    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.336050915Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0127 12:36:50.322951    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.336227915Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0127 12:36:50.322951    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.336312916Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0127 12:36:50.323006    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.336356016Z" level=info msg="containerd successfully booted in 0.033394s"
	I0127 12:36:50.323006    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.313483202Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0127 12:36:50.323006    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.352802934Z" level=info msg="Loading containers: start."
	I0127 12:36:50.323068    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.586901421Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0127 12:36:50.323068    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.690006868Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0127 12:36:50.323128    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.804531453Z" level=info msg="Loading containers: done."
	I0127 12:36:50.323128    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.832567747Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0127 12:36:50.323128    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.832684748Z" level=info msg="Daemon has completed initialization"
	I0127 12:36:50.323128    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.868895669Z" level=info msg="API listen on /var/run/docker.sock"
	I0127 12:36:50.323189    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 systemd[1]: Started Docker Application Container Engine.
	I0127 12:36:50.323189    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.869822273Z" level=info msg="API listen on [::]:2376"
	I0127 12:36:50.323189    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0127 12:36:50.323189    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0127 12:36:50.323248    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0127 12:36:50.323248    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Start docker client with request timeout 0s"
	I0127 12:36:50.323248    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0127 12:36:50.323248    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Loaded network plugin cni"
	I0127 12:36:50.323316    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0127 12:36:50.323316    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0127 12:36:50.323316    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0127 12:36:50.323375    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0127 12:36:50.323375    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Start cri-dockerd grpc backend"
	I0127 12:36:50.323375    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0127 12:36:50.323433    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:36Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-58667487b6-2jq9j_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"4c82c0ec4aeaa9b21462a8248326ae982d6f7a0aee31347f1a58d216f0335177\""
	I0127 12:36:50.323433    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:36Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-668d6bf9bc-2qw6w_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"4a53e133a1cd6ab9514cb15ac3c4f1d5683d17008b482cebb08bf4809e060709\""
	I0127 12:36:50.323493    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.148610487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.323493    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.149713190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.323550    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.149731191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.323550    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.149823291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.323604    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.227312151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.323604    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.227946754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.323604    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.228465355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.323663    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.229058857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.323663    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b770a357d98307d140bf1525f91cca5fa9278f7f9428b9b956db31e6a36de7f2/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:50.323717    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.326758786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.323717    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.326897686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.323717    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.327082287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.323772    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.327397788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.323772    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.340486032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.323823    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.340542232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.323823    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.340557232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.323823    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.340640833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.323899    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/910315897d84204b3db03c56eaeac0c855a23f6250a406220a840c10e2dad7a7/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:50.323899    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5601285bb260a8ced44a77e9dbb10f08580841c917885470ec5941525f08ee76/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:50.323899    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cdf534e99b2bbcc52d3bf2ce73ef5d4299b5264cf0a050fa21ff7f6fe2bb3b2a/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:50.323955    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.671974447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.323955    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.672075247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.323955    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.672094947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.323955    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.673787353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324029    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.761333147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.324029    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.761791949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.324084    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.761989149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324084    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.763491554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324084    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.875104030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.324141    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.875307231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.324141    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.879314144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324193    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.879751245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324193    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.905404632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.324269    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.905473732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.324269    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.905487532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324269    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.905580032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324347    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:41Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0127 12:36:50.324347    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:42.944884578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.324347    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:42.944962279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.324437    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:42.944975379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324437    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:42.945417180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324488    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.028307259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.324488    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.028541060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.324488    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.028779960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324625    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.029212562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324696    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.033020375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.324696    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.033338176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.324696    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.033463276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324763    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.033775977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324763    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/34d579bb511fec290478f20b13002063b43c1a71bd6f2f45f1d83bbd8ac971ab/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:50.324822    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b613e9a7a356580fd5381e358408317fd6120a119c23f3f196adda302e5ca97f/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:50.324822    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d43e4cc62e0877d4b65191623d58195cd33c60eff33c6e49e605f69620d5115f/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:50.324878    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.564400062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.324878    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.564959364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.324972    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.565260665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324972    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.565864167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325051    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.593549260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.325051    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.594548363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.325051    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.594809964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325117    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.595677067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325117    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.831064858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.325164    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.831237859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.325164    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.831252459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325214    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.831462360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325214    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:14.113708902Z" level=info msg="shim disconnected" id=9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f namespace=moby
	I0127 12:36:50.325214    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:14.113811702Z" level=warning msg="cleaning up after shim disconnected" id=9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f namespace=moby
	I0127 12:36:50.325290    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:14.113825002Z" level=info msg="cleaning up dead shim" namespace=moby
	I0127 12:36:50.325290    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 dockerd[1103]: time="2025-01-27T12:36:14.115914814Z" level=info msg="ignoring event" container=9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0127 12:36:50.325340    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.602318882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.325340    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.604079090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.325340    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.604098490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325388    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.604656892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325388    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.795612113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.325443    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.795786714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.325490    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.796654617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325490    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.796995818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325564    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861006350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.325564    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861082751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.325619    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861094651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325619    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861334452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325653    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:36:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6b22dbb5ef3e0d283203499fffad001c9c20c643564a55e5bfa5d6352f80e178/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:50.325683    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:36:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef504f99724cba01531b3894329439ae069a4ccac272e31bfac333cc24e62c53/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0127 12:36:50.325683    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.321502068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.325683    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.321825070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.325683    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.321903471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325683    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.322491776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325683    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.384958874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.325683    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.385201176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.325683    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.385326577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325683    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.385735080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.352785    9948 logs.go:123] Gathering logs for etcd [0ef2a3b50bae] ...
	I0127 12:36:50.352785    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ef2a3b50bae"
	I0127 12:36:50.378318    9948 command_runner.go:130] ! {"level":"warn","ts":"2025-01-27T12:35:38.248296Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0127 12:36:50.379336    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.248523Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.29.198.106:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.29.198.106:2380","--initial-cluster=multinode-659000=https://172.29.198.106:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.29.198.106:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.29.198.106:2380","--name=multinode-659000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0127 12:36:50.379336    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.249804Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0127 12:36:50.379435    9948 command_runner.go:130] ! {"level":"warn","ts":"2025-01-27T12:35:38.249933Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0127 12:36:50.379435    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.249951Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://172.29.198.106:2380"]}
	I0127 12:36:50.379435    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.250358Z","caller":"embed/etcd.go:497","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0127 12:36:50.379506    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.255871Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.29.198.106:2379"]}
	I0127 12:36:50.379572    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.258341Z","caller":"embed/etcd.go:311","msg":"starting an etcd server","etcd-version":"3.5.16","git-sha":"f20bbad","go-version":"go1.22.7","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-659000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.29.198.106:2380"],"listen-peer-urls":["https://172.29.198.106:2380"],"advertise-client-urls":["https://172.29.198.106:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.198.106:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0127 12:36:50.379656    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.282453Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"23.428079ms"}
	I0127 12:36:50.379714    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.322950Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0127 12:36:50.379714    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.352706Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"d020e240c474bd89","local-member-id":"925e6945be3a5b5b","commit-index":2090}
	I0127 12:36:50.379770    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.354002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b switched to configuration voters=()"}
	I0127 12:36:50.379770    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.354079Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became follower at term 2"}
	I0127 12:36:50.379827    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.354103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 925e6945be3a5b5b [peers: [], term: 2, commit: 2090, applied: 0, lastindex: 2090, lastterm: 2]"}
	I0127 12:36:50.379827    9948 command_runner.go:130] ! {"level":"warn","ts":"2025-01-27T12:35:38.367343Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0127 12:36:50.379827    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.371532Z","caller":"mvcc/kvstore.go:346","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1388}
	I0127 12:36:50.379827    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.377112Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":1808}
	I0127 12:36:50.379827    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.386775Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0127 12:36:50.379914    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.395908Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"925e6945be3a5b5b","timeout":"7s"}
	I0127 12:36:50.379945    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.396497Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"925e6945be3a5b5b"}
	I0127 12:36:50.379945    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.396684Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"925e6945be3a5b5b","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	I0127 12:36:50.379945    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.396970Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	I0127 12:36:50.380016    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.399309Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0127 12:36:50.380045    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.401105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b switched to configuration voters=(10546983125613435739)"}
	I0127 12:36:50.380045    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.400045Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0127 12:36:50.380088    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.404834Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0127 12:36:50.380088    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.404888Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0127 12:36:50.380088    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.405566Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d020e240c474bd89","local-member-id":"925e6945be3a5b5b","added-peer-id":"925e6945be3a5b5b","added-peer-peer-urls":["https://172.29.204.17:2380"]}
	I0127 12:36:50.380143    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.405716Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d020e240c474bd89","local-member-id":"925e6945be3a5b5b","cluster-version":"3.5"}
	I0127 12:36:50.380143    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.405754Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0127 12:36:50.380203    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.407643Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0127 12:36:50.380255    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.408091Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"925e6945be3a5b5b","initial-advertise-peer-urls":["https://172.29.198.106:2380"],"listen-peer-urls":["https://172.29.198.106:2380"],"advertise-client-urls":["https://172.29.198.106:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.198.106:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0127 12:36:50.380310    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.408386Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0127 12:36:50.380310    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.408686Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.29.198.106:2380"}
	I0127 12:36:50.380310    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.408809Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.29.198.106:2380"}
	I0127 12:36:50.380373    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.355207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b is starting a new election at term 2"}
	I0127 12:36:50.380373    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.355615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became pre-candidate at term 2"}
	I0127 12:36:50.380373    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.355770Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b received MsgPreVoteResp from 925e6945be3a5b5b at term 2"}
	I0127 12:36:50.380435    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.355926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became candidate at term 3"}
	I0127 12:36:50.380481    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.356088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b received MsgVoteResp from 925e6945be3a5b5b at term 3"}
	I0127 12:36:50.380481    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.356235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became leader at term 3"}
	I0127 12:36:50.380481    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.356449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 925e6945be3a5b5b elected leader 925e6945be3a5b5b at term 3"}
	I0127 12:36:50.380481    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.368540Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"925e6945be3a5b5b","local-member-attributes":"{Name:multinode-659000 ClientURLs:[https://172.29.198.106:2379]}","request-path":"/0/members/925e6945be3a5b5b/attributes","cluster-id":"d020e240c474bd89","publish-timeout":"7s"}
	I0127 12:36:50.380580    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.369045Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0127 12:36:50.380580    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.371833Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0127 12:36:50.380580    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.372238Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0127 12:36:50.380580    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.374158Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0127 12:36:50.380580    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.383680Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0127 12:36:50.380580    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.391404Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0127 12:36:50.380580    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.392982Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.29.198.106:2379"}
	I0127 12:36:50.380683    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.399505Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0127 12:36:50.387951    9948 logs.go:123] Gathering logs for kube-scheduler [ed51c7eaa966] ...
	I0127 12:36:50.387951    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed51c7eaa966"
	I0127 12:36:50.412266    9948 command_runner.go:130] ! I0127 12:35:39.285954       1 serving.go:386] Generated self-signed cert in-memory
	I0127 12:36:50.413250    9948 command_runner.go:130] ! W0127 12:35:41.361191       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0127 12:36:50.413402    9948 command_runner.go:130] ! W0127 12:35:41.363231       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0127 12:36:50.413470    9948 command_runner.go:130] ! W0127 12:35:41.363467       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0127 12:36:50.413470    9948 command_runner.go:130] ! W0127 12:35:41.363598       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 12:36:50.413524    9948 command_runner.go:130] ! I0127 12:35:41.458309       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 12:36:50.413524    9948 command_runner.go:130] ! I0127 12:35:41.458594       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:50.413575    9948 command_runner.go:130] ! I0127 12:35:41.465036       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 12:36:50.413575    9948 command_runner.go:130] ! I0127 12:35:41.465587       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 12:36:50.413575    9948 command_runner.go:130] ! I0127 12:35:41.466480       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:36:50.413617    9948 command_runner.go:130] ! I0127 12:35:41.466554       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:50.413617    9948 command_runner.go:130] ! I0127 12:35:41.567642       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:36:50.415904    9948 logs.go:123] Gathering logs for kube-proxy [bbec7ccef7da] ...
	I0127 12:36:50.415904    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbec7ccef7da"
	I0127 12:36:50.441771    9948 command_runner.go:130] ! I0127 12:12:05.290111       1 server_linux.go:66] "Using iptables proxy"
	I0127 12:36:50.441771    9948 command_runner.go:130] ! E0127 12:12:05.321300       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0127 12:36:50.441771    9948 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0127 12:36:50.441771    9948 command_runner.go:130] ! 	add table ip kube-proxy
	I0127 12:36:50.441771    9948 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:50.441771    9948 command_runner.go:130] !  >
	I0127 12:36:50.441771    9948 command_runner.go:130] ! E0127 12:12:05.352123       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0127 12:36:50.441771    9948 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0127 12:36:50.441771    9948 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0127 12:36:50.441771    9948 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:50.441771    9948 command_runner.go:130] !  >
	I0127 12:36:50.441771    9948 command_runner.go:130] ! I0127 12:12:05.378799       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.204.17"]
	I0127 12:36:50.441771    9948 command_runner.go:130] ! E0127 12:12:05.378872       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 12:36:50.441771    9948 command_runner.go:130] ! I0127 12:12:05.470419       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 12:36:50.442748    9948 command_runner.go:130] ! I0127 12:12:05.470552       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 12:36:50.442748    9948 command_runner.go:130] ! I0127 12:12:05.470596       1 server_linux.go:170] "Using iptables Proxier"
	I0127 12:36:50.442748    9948 command_runner.go:130] ! I0127 12:12:05.475557       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 12:36:50.442748    9948 command_runner.go:130] ! I0127 12:12:05.476697       1 server.go:497] "Version info" version="v1.32.1"
	I0127 12:36:50.442748    9948 command_runner.go:130] ! I0127 12:12:05.476717       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:50.442748    9948 command_runner.go:130] ! I0127 12:12:05.478788       1 config.go:199] "Starting service config controller"
	I0127 12:36:50.442748    9948 command_runner.go:130] ! I0127 12:12:05.478844       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 12:36:50.442748    9948 command_runner.go:130] ! I0127 12:12:05.478916       1 config.go:105] "Starting endpoint slice config controller"
	I0127 12:36:50.442748    9948 command_runner.go:130] ! I0127 12:12:05.479018       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 12:36:50.442930    9948 command_runner.go:130] ! I0127 12:12:05.480053       1 config.go:329] "Starting node config controller"
	I0127 12:36:50.442930    9948 command_runner.go:130] ! I0127 12:12:05.480113       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 12:36:50.442930    9948 command_runner.go:130] ! I0127 12:12:05.579605       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 12:36:50.442930    9948 command_runner.go:130] ! I0127 12:12:05.579669       1 shared_informer.go:320] Caches are synced for service config
	I0127 12:36:50.442930    9948 command_runner.go:130] ! I0127 12:12:05.580463       1 shared_informer.go:320] Caches are synced for node config
	I0127 12:36:50.445304    9948 logs.go:123] Gathering logs for kindnet [d758000dda95] ...
	I0127 12:36:50.445304    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d758000dda95"
	I0127 12:36:50.477702    9948 command_runner.go:130] ! I0127 12:22:14.854106       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:14.855096       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:14.855184       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:24.859265       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:24.859464       1 main.go:301] handling current node
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:24.859638       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:24.859681       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:24.860150       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:24.860242       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:34.860201       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:34.860282       1 main.go:301] handling current node
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:34.860531       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:34.860551       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:34.861114       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:34.861204       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:44.853677       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:44.853737       1 main.go:301] handling current node
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:44.853761       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:44.853838       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:44.855661       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:44.855749       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:54.856510       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:54.856632       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:54.857002       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:54.857030       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:54.857252       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:54.857371       1 main.go:301] handling current node
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:04.859476       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:04.859579       1 main.go:301] handling current node
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:04.859615       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:04.859623       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:04.859972       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:04.859987       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:14.853396       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:14.853515       1 main.go:301] handling current node
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:14.853537       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:14.853546       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:14.853802       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:14.853843       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:24.853600       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:24.853883       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:24.854392       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:24.854484       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:24.854688       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:24.854773       1 main.go:301] handling current node
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:34.853542       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:34.853600       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:34.854132       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:34.854286       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:34.854787       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:34.854920       1 main.go:301] handling current node
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:44.856707       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:44.856833       1 main.go:301] handling current node
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:44.856869       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:44.856877       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:44.857371       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:44.857460       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:54.853590       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:54.853737       1 main.go:301] handling current node
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:54.853759       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:54.853768       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:54.854333       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:54.854403       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:24:04.862983       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:24:04.863248       1 main.go:301] handling current node
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:24:04.863599       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:24:04.863808       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:24:04.864418       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:24:04.864558       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.479690    9948 command_runner.go:130] ! I0127 12:24:14.854114       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.479690    9948 command_runner.go:130] ! I0127 12:24:14.854152       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.479690    9948 command_runner.go:130] ! I0127 12:24:14.854412       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.479787    9948 command_runner.go:130] ! I0127 12:24:14.854490       1 main.go:301] handling current node
	I0127 12:36:50.479787    9948 command_runner.go:130] ! I0127 12:24:14.854619       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.479787    9948 command_runner.go:130] ! I0127 12:24:14.854711       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.479787    9948 command_runner.go:130] ! I0127 12:24:24.857372       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.479787    9948 command_runner.go:130] ! I0127 12:24:24.857503       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.479787    9948 command_runner.go:130] ! I0127 12:24:24.857861       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.479889    9948 command_runner.go:130] ! I0127 12:24:24.857991       1 main.go:301] handling current node
	I0127 12:36:50.479889    9948 command_runner.go:130] ! I0127 12:24:24.858058       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.479889    9948 command_runner.go:130] ! I0127 12:24:24.858126       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.479944    9948 command_runner.go:130] ! I0127 12:24:34.854371       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.479944    9948 command_runner.go:130] ! I0127 12:24:34.854425       1 main.go:301] handling current node
	I0127 12:36:50.479944    9948 command_runner.go:130] ! I0127 12:24:34.854444       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.479944    9948 command_runner.go:130] ! I0127 12:24:34.854451       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.479944    9948 command_runner.go:130] ! I0127 12:24:34.855276       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480010    9948 command_runner.go:130] ! I0127 12:24:34.855359       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480010    9948 command_runner.go:130] ! I0127 12:24:44.862967       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480010    9948 command_runner.go:130] ! I0127 12:24:44.863069       1 main.go:301] handling current node
	I0127 12:36:50.480066    9948 command_runner.go:130] ! I0127 12:24:44.863118       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480066    9948 command_runner.go:130] ! I0127 12:24:44.863132       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480066    9948 command_runner.go:130] ! I0127 12:24:44.863438       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480066    9948 command_runner.go:130] ! I0127 12:24:44.863559       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480131    9948 command_runner.go:130] ! I0127 12:24:54.856232       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480170    9948 command_runner.go:130] ! I0127 12:24:54.856343       1 main.go:301] handling current node
	I0127 12:36:50.480170    9948 command_runner.go:130] ! I0127 12:24:54.856417       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480170    9948 command_runner.go:130] ! I0127 12:24:54.856429       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480227    9948 command_runner.go:130] ! I0127 12:24:54.857056       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480227    9948 command_runner.go:130] ! I0127 12:24:54.857188       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480269    9948 command_runner.go:130] ! I0127 12:25:04.853438       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480269    9948 command_runner.go:130] ! I0127 12:25:04.853551       1 main.go:301] handling current node
	I0127 12:36:50.480269    9948 command_runner.go:130] ! I0127 12:25:04.853573       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480303    9948 command_runner.go:130] ! I0127 12:25:04.853581       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480303    9948 command_runner.go:130] ! I0127 12:25:04.853903       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480356    9948 command_runner.go:130] ! I0127 12:25:04.853979       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480356    9948 command_runner.go:130] ! I0127 12:25:14.854463       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480356    9948 command_runner.go:130] ! I0127 12:25:14.854571       1 main.go:301] handling current node
	I0127 12:36:50.480395    9948 command_runner.go:130] ! I0127 12:25:14.854614       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480395    9948 command_runner.go:130] ! I0127 12:25:14.854630       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480395    9948 command_runner.go:130] ! I0127 12:25:14.855124       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480432    9948 command_runner.go:130] ! I0127 12:25:14.855157       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480432    9948 command_runner.go:130] ! I0127 12:25:24.853742       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480432    9948 command_runner.go:130] ! I0127 12:25:24.853838       1 main.go:301] handling current node
	I0127 12:36:50.480480    9948 command_runner.go:130] ! I0127 12:25:24.853859       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480480    9948 command_runner.go:130] ! I0127 12:25:24.853866       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480480    9948 command_runner.go:130] ! I0127 12:25:24.854822       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480517    9948 command_runner.go:130] ! I0127 12:25:24.854982       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480517    9948 command_runner.go:130] ! I0127 12:25:34.853374       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480517    9948 command_runner.go:130] ! I0127 12:25:34.853516       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480564    9948 command_runner.go:130] ! I0127 12:25:34.853756       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480564    9948 command_runner.go:130] ! I0127 12:25:34.853919       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480600    9948 command_runner.go:130] ! I0127 12:25:34.854285       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480600    9948 command_runner.go:130] ! I0127 12:25:34.854360       1 main.go:301] handling current node
	I0127 12:36:50.480600    9948 command_runner.go:130] ! I0127 12:25:44.855075       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480648    9948 command_runner.go:130] ! I0127 12:25:44.855182       1 main.go:301] handling current node
	I0127 12:36:50.480648    9948 command_runner.go:130] ! I0127 12:25:44.855201       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480648    9948 command_runner.go:130] ! I0127 12:25:44.855209       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480684    9948 command_runner.go:130] ! I0127 12:25:44.856108       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480717    9948 command_runner.go:130] ! I0127 12:25:44.856191       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480717    9948 command_runner.go:130] ! I0127 12:25:54.854358       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480717    9948 command_runner.go:130] ! I0127 12:25:54.854550       1 main.go:301] handling current node
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:25:54.854584       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:25:54.854606       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:25:54.854829       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:25:54.854893       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:04.853425       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:04.853480       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:04.854150       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:04.854221       1 main.go:301] handling current node
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:04.854322       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:04.854350       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:14.853895       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:14.854577       1 main.go:301] handling current node
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:14.854615       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:14.854639       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:14.856224       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:14.856319       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:24.858046       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:24.858200       1 main.go:301] handling current node
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:24.858527       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:24.858599       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:24.859022       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:24.859118       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:34.853783       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:34.853853       1 main.go:301] handling current node
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:34.853871       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:34.853878       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:34.854193       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:34.854260       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:44.856492       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:44.856552       1 main.go:301] handling current node
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:44.856569       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:44.856575       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:44.857163       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:44.857246       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:54.858285       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:54.858431       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:54.859101       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:54.859322       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:54.859474       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:54.859544       1 main.go:301] handling current node
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:27:04.858831       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:27:04.858967       1 main.go:301] handling current node
	I0127 12:36:50.481283    9948 command_runner.go:130] ! I0127 12:27:04.859484       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.481283    9948 command_runner.go:130] ! I0127 12:27:04.859592       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.481340    9948 command_runner.go:130] ! I0127 12:27:04.860213       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.481340    9948 command_runner.go:130] ! I0127 12:27:04.860314       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.481340    9948 command_runner.go:130] ! I0127 12:27:14.854313       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.481340    9948 command_runner.go:130] ! I0127 12:27:14.854366       1 main.go:301] handling current node
	I0127 12:36:50.481410    9948 command_runner.go:130] ! I0127 12:27:14.854386       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.481410    9948 command_runner.go:130] ! I0127 12:27:14.854394       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.481459    9948 command_runner.go:130] ! I0127 12:27:14.854883       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.481459    9948 command_runner.go:130] ! I0127 12:27:14.855322       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:24.859182       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:24.859342       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:24.859757       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:24.859824       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:24.860078       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:24.860255       1 main.go:301] handling current node
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:34.854206       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:34.854462       1 main.go:301] handling current node
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:34.854567       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:34.854657       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:34.855188       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:34.855233       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:44.861342       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:44.861572       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:44.862224       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:44.862399       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:44.862648       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:44.862687       1 main.go:301] handling current node
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:54.853605       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:54.853658       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:54.853924       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:54.854125       1 main.go:301] handling current node
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:54.854203       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:54.854216       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:28:04.859858       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:28:04.859922       1 main.go:301] handling current node
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:28:04.859984       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:28:04.860038       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:28:04.860336       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:28:04.860450       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:28:14.853470       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:28:14.853607       1 main.go:301] handling current node
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:28:14.853627       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:28:14.853634       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:28:14.854800       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:28:14.854899       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.482067    9948 command_runner.go:130] ! I0127 12:28:24.853786       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.482067    9948 command_runner.go:130] ! I0127 12:28:24.853841       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.482067    9948 command_runner.go:130] ! I0127 12:28:24.854051       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.482067    9948 command_runner.go:130] ! I0127 12:28:24.854078       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.482067    9948 command_runner.go:130] ! I0127 12:28:24.854192       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.482067    9948 command_runner.go:130] ! I0127 12:28:24.854297       1 main.go:301] handling current node
	I0127 12:36:50.482067    9948 command_runner.go:130] ! I0127 12:28:34.853571       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.482067    9948 command_runner.go:130] ! I0127 12:28:34.853730       1 main.go:301] handling current node
	I0127 12:36:50.482232    9948 command_runner.go:130] ! I0127 12:28:34.853756       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.482232    9948 command_runner.go:130] ! I0127 12:28:34.853765       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.482232    9948 command_runner.go:130] ! I0127 12:28:34.853988       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.482232    9948 command_runner.go:130] ! I0127 12:28:34.854180       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.482232    9948 command_runner.go:130] ! I0127 12:28:44.853630       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.482314    9948 command_runner.go:130] ! I0127 12:28:44.854161       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.482314    9948 command_runner.go:130] ! I0127 12:28:44.854753       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.482314    9948 command_runner.go:130] ! I0127 12:28:44.854886       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.482314    9948 command_runner.go:130] ! I0127 12:28:44.855270       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.482371    9948 command_runner.go:130] ! I0127 12:28:44.855393       1 main.go:301] handling current node
	I0127 12:36:50.482371    9948 command_runner.go:130] ! I0127 12:28:54.856731       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.482371    9948 command_runner.go:130] ! I0127 12:28:54.856780       1 main.go:301] handling current node
	I0127 12:36:50.482587    9948 command_runner.go:130] ! I0127 12:28:54.856800       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.482650    9948 command_runner.go:130] ! I0127 12:28:54.856807       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.482650    9948 command_runner.go:130] ! I0127 12:28:54.857466       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.482693    9948 command_runner.go:130] ! I0127 12:28:54.857531       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.482693    9948 command_runner.go:130] ! I0127 12:29:04.853996       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.482746    9948 command_runner.go:130] ! I0127 12:29:04.854093       1 main.go:301] handling current node
	I0127 12:36:50.482746    9948 command_runner.go:130] ! I0127 12:29:04.854113       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.482787    9948 command_runner.go:130] ! I0127 12:29:04.854120       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.482787    9948 command_runner.go:130] ! I0127 12:29:04.854865       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.482787    9948 command_runner.go:130] ! I0127 12:29:04.855000       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.482839    9948 command_runner.go:130] ! I0127 12:29:14.853874       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.482839    9948 command_runner.go:130] ! I0127 12:29:14.854279       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.482895    9948 command_runner.go:130] ! I0127 12:29:14.854677       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.482895    9948 command_runner.go:130] ! I0127 12:29:14.854896       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.482895    9948 command_runner.go:130] ! I0127 12:29:14.855469       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.482941    9948 command_runner.go:130] ! I0127 12:29:14.856845       1 main.go:301] handling current node
	I0127 12:36:50.482941    9948 command_runner.go:130] ! I0127 12:29:24.853660       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.482941    9948 command_runner.go:130] ! I0127 12:29:24.853766       1 main.go:301] handling current node
	I0127 12:36:50.482995    9948 command_runner.go:130] ! I0127 12:29:24.853786       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.483070    9948 command_runner.go:130] ! I0127 12:29:24.853793       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.483070    9948 command_runner.go:130] ! I0127 12:29:24.854261       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.483070    9948 command_runner.go:130] ! I0127 12:29:24.854541       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.483106    9948 command_runner.go:130] ! I0127 12:29:34.861616       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.483106    9948 command_runner.go:130] ! I0127 12:29:34.861807       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.483106    9948 command_runner.go:130] ! I0127 12:29:34.862166       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.483153    9948 command_runner.go:130] ! I0127 12:29:34.862228       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.483153    9948 command_runner.go:130] ! I0127 12:29:34.862400       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.483190    9948 command_runner.go:130] ! I0127 12:29:34.862455       1 main.go:301] handling current node
	I0127 12:36:50.483190    9948 command_runner.go:130] ! I0127 12:29:44.854294       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.483190    9948 command_runner.go:130] ! I0127 12:29:44.854418       1 main.go:301] handling current node
	I0127 12:36:50.483190    9948 command_runner.go:130] ! I0127 12:29:44.854439       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.483237    9948 command_runner.go:130] ! I0127 12:29:44.854448       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.483237    9948 command_runner.go:130] ! I0127 12:29:44.854699       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.483237    9948 command_runner.go:130] ! I0127 12:29:44.854776       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.483272    9948 command_runner.go:130] ! I0127 12:29:54.853707       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.483272    9948 command_runner.go:130] ! I0127 12:29:54.853780       1 main.go:301] handling current node
	I0127 12:36:50.483272    9948 command_runner.go:130] ! I0127 12:29:54.853914       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.483314    9948 command_runner.go:130] ! I0127 12:29:54.854022       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.483314    9948 command_runner.go:130] ! I0127 12:29:54.854423       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:29:54.854566       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:04.853625       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:04.853820       1 main.go:301] handling current node
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:04.854002       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:04.854301       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:04.854878       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:04.854986       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:14.853537       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:14.853729       1 main.go:301] handling current node
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:14.853749       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:14.853756       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:14.855013       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:14.855147       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:24.853563       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:24.853757       1 main.go:301] handling current node
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:24.853779       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:24.853786       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:24.854220       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:24.854327       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:34.858899       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:34.859124       1 main.go:301] handling current node
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:34.859146       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:34.859676       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:34.860572       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:34.860819       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:44.858769       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:44.858890       1 main.go:301] handling current node
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:44.858912       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:44.858920       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:44.859720       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:44.859809       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:54.855090       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:54.855134       1 main.go:301] handling current node
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:54.855151       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:54.855157       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:54.855561       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:54.855573       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:31:04.854121       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:31:04.854237       1 main.go:301] handling current node
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:31:04.854256       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:31:04.854263       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:31:04.854424       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:31:04.854452       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.483905    9948 command_runner.go:130] ! I0127 12:31:04.854544       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.29.206.88 Flags: [] Table: 0 Realm: 0} 
	I0127 12:36:50.483905    9948 command_runner.go:130] ! I0127 12:31:14.853651       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.483905    9948 command_runner.go:130] ! I0127 12:31:14.853750       1 main.go:301] handling current node
	I0127 12:36:50.483990    9948 command_runner.go:130] ! I0127 12:31:14.853771       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.483990    9948 command_runner.go:130] ! I0127 12:31:14.853778       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.483990    9948 command_runner.go:130] ! I0127 12:31:14.854005       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.483990    9948 command_runner.go:130] ! I0127 12:31:14.854084       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.483990    9948 command_runner.go:130] ! I0127 12:31:24.854114       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.484051    9948 command_runner.go:130] ! I0127 12:31:24.854161       1 main.go:301] handling current node
	I0127 12:36:50.484087    9948 command_runner.go:130] ! I0127 12:31:24.854212       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:24.854223       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:24.854591       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:24.854666       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:34.862705       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:34.862793       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:34.863105       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:34.863140       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:34.863334       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:34.863362       1 main.go:301] handling current node
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:44.855275       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:44.855421       1 main.go:301] handling current node
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:44.855462       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:44.855496       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:44.856579       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:44.856690       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:54.856288       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:54.856579       1 main.go:301] handling current node
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:54.856914       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:54.857065       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:54.857508       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:54.857553       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:04.853556       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:04.853630       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:04.854583       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:04.854615       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:04.857114       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:04.857217       1 main.go:301] handling current node
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:14.854183       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:14.854348       1 main.go:301] handling current node
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:14.854376       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:14.854402       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:14.854890       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:14.854992       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:24.853770       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:24.854222       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:24.854498       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:24.854573       1 main.go:301] handling current node
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:24.854606       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:24.854613       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:34.853556       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:34.853715       1 main.go:301] handling current node
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:34.853749       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.484711    9948 command_runner.go:130] ! I0127 12:32:34.853879       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.484711    9948 command_runner.go:130] ! I0127 12:32:34.854386       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.484891    9948 command_runner.go:130] ! I0127 12:32:34.854469       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.484891    9948 command_runner.go:130] ! I0127 12:32:44.853378       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.484928    9948 command_runner.go:130] ! I0127 12:32:44.853424       1 main.go:301] handling current node
	I0127 12:36:50.484928    9948 command_runner.go:130] ! I0127 12:32:44.853441       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.484928    9948 command_runner.go:130] ! I0127 12:32:44.853447       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.484928    9948 command_runner.go:130] ! I0127 12:32:44.853735       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.484979    9948 command_runner.go:130] ! I0127 12:32:44.853765       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.485015    9948 command_runner.go:130] ! I0127 12:32:54.859317       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.485015    9948 command_runner.go:130] ! I0127 12:32:54.859396       1 main.go:301] handling current node
	I0127 12:36:50.485015    9948 command_runner.go:130] ! I0127 12:32:54.859415       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.485062    9948 command_runner.go:130] ! I0127 12:32:54.859421       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.485062    9948 command_runner.go:130] ! I0127 12:32:54.859756       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.485098    9948 command_runner.go:130] ! I0127 12:32:54.859853       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.485098    9948 command_runner.go:130] ! I0127 12:33:04.861975       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.485098    9948 command_runner.go:130] ! I0127 12:33:04.862085       1 main.go:301] handling current node
	I0127 12:36:50.485098    9948 command_runner.go:130] ! I0127 12:33:04.862106       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.485145    9948 command_runner.go:130] ! I0127 12:33:04.862113       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.485145    9948 command_runner.go:130] ! I0127 12:33:04.862780       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.485181    9948 command_runner.go:130] ! I0127 12:33:04.862861       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.485181    9948 command_runner.go:130] ! I0127 12:33:14.853823       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.485181    9948 command_runner.go:130] ! I0127 12:33:14.853859       1 main.go:301] handling current node
	I0127 12:36:50.485228    9948 command_runner.go:130] ! I0127 12:33:14.853877       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.485228    9948 command_runner.go:130] ! I0127 12:33:14.853884       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.485264    9948 command_runner.go:130] ! I0127 12:33:14.854153       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.485264    9948 command_runner.go:130] ! I0127 12:33:14.854165       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.501552    9948 logs.go:123] Gathering logs for container status ...
	I0127 12:36:50.501552    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:36:50.567788    9948 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0127 12:36:50.567917    9948 command_runner.go:130] > 528243cca8bfb       8c811b4aec35f                                                                                         3 seconds ago        Running             busybox                   1                   ef504f99724cb       busybox-58667487b6-2jq9j
	I0127 12:36:50.567917    9948 command_runner.go:130] > b3a9ed6e130c0       c69fa2e9cbf5f                                                                                         3 seconds ago        Running             coredns                   1                   6b22dbb5ef3e0       coredns-668d6bf9bc-2qw6w
	I0127 12:36:50.567917    9948 command_runner.go:130] > 389606c183b19       6e38f40d628db                                                                                         23 seconds ago       Running             storage-provisioner       2                   b613e9a7a3565       storage-provisioner
	I0127 12:36:50.568044    9948 command_runner.go:130] > 373bec67270fb       50415e5d05f05                                                                                         About a minute ago   Running             kindnet-cni               1                   d43e4cc62e087       kindnet-z2hqq
	I0127 12:36:50.568044    9948 command_runner.go:130] > 9b2db1d0cb61c       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   b613e9a7a3565       storage-provisioner
	I0127 12:36:50.568119    9948 command_runner.go:130] > 0283b35dee3cc       e29f9c7391fd9                                                                                         About a minute ago   Running             kube-proxy                1                   34d579bb511fe       kube-proxy-s46mv
	I0127 12:36:50.568155    9948 command_runner.go:130] > ea993630a3109       95c0bda56fc4d                                                                                         About a minute ago   Running             kube-apiserver            0                   5601285bb260a       kube-apiserver-multinode-659000
	I0127 12:36:50.568201    9948 command_runner.go:130] > 0ef2a3b50bae8       a9e7e6b294baf                                                                                         About a minute ago   Running             etcd                      0                   cdf534e99b2bb       etcd-multinode-659000
	I0127 12:36:50.568235    9948 command_runner.go:130] > ed51c7eaa9666       2b0d6572d062c                                                                                         About a minute ago   Running             kube-scheduler            1                   910315897d842       kube-scheduler-multinode-659000
	I0127 12:36:50.568235    9948 command_runner.go:130] > 8d4872cda28de       019ee182b58e2                                                                                         About a minute ago   Running             kube-controller-manager   1                   b770a357d9830       kube-controller-manager-multinode-659000
	I0127 12:36:50.568235    9948 command_runner.go:130] > 998a64b2baa2d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   4c82c0ec4aeaa       busybox-58667487b6-2jq9j
	I0127 12:36:50.568235    9948 command_runner.go:130] > f818dd15d8b02       c69fa2e9cbf5f                                                                                         24 minutes ago       Exited              coredns                   0                   4a53e133a1cd6       coredns-668d6bf9bc-2qw6w
	I0127 12:36:50.568235    9948 command_runner.go:130] > d758000dda95d       kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108              24 minutes ago       Exited              kindnet-cni               0                   f2d0bd65fe50d       kindnet-z2hqq
	I0127 12:36:50.568235    9948 command_runner.go:130] > bbec7ccef7da5       e29f9c7391fd9                                                                                         24 minutes ago       Exited              kube-proxy                0                   319cddeebceb6       kube-proxy-s46mv
	I0127 12:36:50.568235    9948 command_runner.go:130] > a16e06a038601       2b0d6572d062c                                                                                         24 minutes ago       Exited              kube-scheduler            0                   5423fc5113290       kube-scheduler-multinode-659000
	I0127 12:36:50.568235    9948 command_runner.go:130] > e07a66f8f6196       019ee182b58e2                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   1bd5bf99bede3       kube-controller-manager-multinode-659000
	I0127 12:36:50.570849    9948 logs.go:123] Gathering logs for kubelet ...
	I0127 12:36:50.570939    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:36:50.608551    9948 command_runner.go:130] > Jan 27 12:35:32 multinode-659000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0127 12:36:50.608689    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1507]: I0127 12:35:33.096330    1507 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0127 12:36:50.608689    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1507]: I0127 12:35:33.097069    1507 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:50.608876    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1507]: I0127 12:35:33.098504    1507 server.go:954] "Client rotation is on, will bootstrap in background"
	I0127 12:36:50.608915    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1507]: E0127 12:35:33.099084    1507 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0127 12:36:50.608949    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:50.609007    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0127 12:36:50.609041    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0127 12:36:50.609041    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1565]: I0127 12:35:33.855505    1565 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1565]: I0127 12:35:33.856023    1565 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1565]: I0127 12:35:33.856456    1565 server.go:954] "Client rotation is on, will bootstrap in background"
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1565]: E0127 12:35:33.856573    1565 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:34 multinode-659000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.167839    1648 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.168570    1648 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.169526    1648 server.go:954] "Client rotation is on, will bootstrap in background"
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.171330    1648 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.190537    1648 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.208219    1648 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.208354    1648 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.217489    1648 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.217603    1648 server.go:841] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.218319    1648 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.218396    1648 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-659000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.218720    1648 topology_manager.go:138] "Creating topology manager with none policy"
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.218780    1648 container_manager_linux.go:304] "Creating device plugin manager"
	I0127 12:36:50.609671    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.219430    1648 state_mem.go:36] "Initialized new in-memory state store"
	I0127 12:36:50.609671    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.221396    1648 kubelet.go:446] "Attempting to sync node with API server"
	I0127 12:36:50.609736    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.221465    1648 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0127 12:36:50.609736    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.221524    1648 kubelet.go:352] "Adding apiserver pod source"
	I0127 12:36:50.609736    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.221568    1648 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0127 12:36:50.609873    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.230949    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-659000&limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:50.609910    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.231085    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-659000&limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:50.609910    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.232363    1648 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="docker" version="27.4.0" apiVersion="v1"
	I0127 12:36:50.609960    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.236967    1648 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0127 12:36:50.609996    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.237190    1648 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0127 12:36:50.609996    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.245589    1648 watchdog_linux.go:99] "Systemd watchdog is not enabled"
	I0127 12:36:50.610045    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.245760    1648 server.go:1287] "Started kubelet"
	I0127 12:36:50.610081    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.246317    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:50.610129    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.246411    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.246814    1648 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.247495    1648 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.249106    1648 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.260914    1648 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.262947    1648 server.go:490] "Adding debug handlers to kubelet server"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.264052    1648 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.267083    1648 volume_manager.go:297] "Starting Kubelet Volume Manager"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.267485    1648 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"multinode-659000\" not found"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.270946    1648 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.29.198.106:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-659000.181e8cd12d2fa1af  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-659000,UID:multinode-659000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-659000,},FirstTimestamp:2025-01-27 12:35:36.245739951 +0000 UTC m=+0.150414507,LastTimestamp:2025-01-27 12:35:36.245739951 +0000 UTC m=+0.150414507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-659000,}"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.275270    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-659000?timeout=10s\": dial tcp 172.29.198.106:8443: connect: connection refused" interval="200ms"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.275715    1648 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.280615    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.280911    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.282354    1648 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.282424    1648 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.282441    1648 factory.go:221] Registration of the systemd container factory successfully
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.345823    1648 reconciler.go:26] "Reconciler: start to sync state"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.348883    1648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.352701    1648 cpu_manager.go:221] "Starting CPU manager" policy="none"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.352736    1648 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.352866    1648 state_mem.go:36] "Initialized new in-memory state store"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353577    1648 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353729    1648 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353769    1648 policy_none.go:49] "None policy: Start"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353902    1648 memory_manager.go:186] "Starting memorymanager" policy="None"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353967    1648 state_mem.go:35] "Initializing new in-memory state store"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.354751    1648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.354791    1648 status_manager.go:227] "Starting to sync pod status with apiserver"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.354811    1648 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I0127 12:36:50.610744    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.354819    1648 kubelet.go:2388] "Starting kubelet main sync loop"
	I0127 12:36:50.610744    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.354862    1648 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0127 12:36:50.610807    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.355393    1648 state_mem.go:75] "Updated machine memory state"
	I0127 12:36:50.610807    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.358802    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:50.610807    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.358857    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:50.610914    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.371233    1648 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"multinode-659000\" not found"
	I0127 12:36:50.610951    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.373395    1648 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0127 12:36:50.611001    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.373786    1648 eviction_manager.go:189] "Eviction manager: starting control loop"
	I0127 12:36:50.611038    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.373887    1648 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0127 12:36:50.611078    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.380088    1648 iptables.go:577] "Could not set up iptables canary" err=<
	I0127 12:36:50.611078    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.380760    1648 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.380984    1648 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-659000\" not found"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.382902    1648 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.468172    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.468821    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c82c0ec4aeaa9b21462a8248326ae982d6f7a0aee31347f1a58d216f0335177"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.468934    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2d0bd65fe50d3b8a64acf8ee065aa49d1a51b768c5fe6fe9532d26fa35aa7b1"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.468988    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bd5bf99bede3e691e572fc4b8a37f4f42f8a9b2520adf8bc87bdf76e8258a4b"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.469050    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5423fc5113290b937df9b531c5fbd748c5d927fd5e170e8126b67bae6a814384"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.470252    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.475717    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.477090    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-659000?timeout=10s\": dial tcp 172.29.198.106:8443: connect: connection refused" interval="400ms"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.480196    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.198.106:8443: connect: connection refused" node="multinode-659000"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.487429    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc9ef8ee86ec2e354006c4c56f82fe9ec4df472096628ad620faba06fa0b1ff8"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.508448    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a53e133a1cd6ab9514cb15ac3c4f1d5683d17008b482cebb08bf4809e060709"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.523288    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="319cddeebceb6ec82b5865f1c67eaf88948a282ace1113869910f5bf8c717d83"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.545844    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b522c4c9f4c776ea35298b9eaf7c05d64bddd6f385e12252bdf6aada9a3e20d"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.566476    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e6c90fc43fa6c0754218ff1c4162045d-kubeconfig\") pod \"kube-scheduler-multinode-659000\" (UID: \"e6c90fc43fa6c0754218ff1c4162045d\") " pod="kube-system/kube-scheduler-multinode-659000"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.566534    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b9fbd177058ba298cde2a92c4ef5c601-k8s-certs\") pod \"kube-apiserver-multinode-659000\" (UID: \"b9fbd177058ba298cde2a92c4ef5c601\") " pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.566560    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-kubeconfig\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567472    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:50.611701    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567527    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/575cefa3aa8017dce576fa244e719a4e-etcd-certs\") pod \"etcd-multinode-659000\" (UID: \"575cefa3aa8017dce576fa244e719a4e\") " pod="kube-system/etcd-multinode-659000"
	I0127 12:36:50.611765    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567546    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/575cefa3aa8017dce576fa244e719a4e-etcd-data\") pod \"etcd-multinode-659000\" (UID: \"575cefa3aa8017dce576fa244e719a4e\") " pod="kube-system/etcd-multinode-659000"
	I0127 12:36:50.611765    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567563    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b9fbd177058ba298cde2a92c4ef5c601-ca-certs\") pod \"kube-apiserver-multinode-659000\" (UID: \"b9fbd177058ba298cde2a92c4ef5c601\") " pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:50.611885    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567580    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-ca-certs\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:50.611921    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567687    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-flexvolume-dir\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:50.611969    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567720    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-k8s-certs\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:50.612005    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567745    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b9fbd177058ba298cde2a92c4ef5c601-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-659000\" (UID: \"b9fbd177058ba298cde2a92c4ef5c601\") " pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:50.612054    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567166    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51ee4649b24aa281b3767c049c3c1d4063e516b98501648152da39ee45cb0b26"
	I0127 12:36:50.612089    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.569350    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.612138    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.570289    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.612138    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.681872    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:50.612174    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.682569    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.198.106:8443: connect: connection refused" node="multinode-659000"
	I0127 12:36:50.612222    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.878668    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-659000?timeout=10s\": dial tcp 172.29.198.106:8443: connect: connection refused" interval="800ms"
	I0127 12:36:50.612475    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: W0127 12:35:37.056372    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:50.612504    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.056534    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:50.612585    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: I0127 12:35:37.084276    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:50.612612    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.085344    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.198.106:8443: connect: connection refused" node="multinode-659000"
	I0127 12:36:50.612652    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: W0127 12:35:37.281985    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:50.612688    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.282078    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:50.612736    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: W0127 12:35:37.629266    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-659000&limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:50.612815    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.629409    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-659000&limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:50.612851    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: W0127 12:35:37.673700    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:50.612898    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.673876    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:50.612934    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.680515    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-659000?timeout=10s\": dial tcp 172.29.198.106:8443: connect: connection refused" interval="1.6s"
	I0127 12:36:50.612934    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: I0127 12:35:37.887498    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:50.612982    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.888458    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.198.106:8443: connect: connection refused" node="multinode-659000"
	I0127 12:36:50.613017    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: E0127 12:35:39.058364    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.613065    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: E0127 12:35:39.084210    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.613065    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: E0127 12:35:39.099659    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.613149    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: E0127 12:35:39.112572    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.613185    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: I0127 12:35:39.489967    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:50.613234    9948 command_runner.go:130] > Jan 27 12:35:40 multinode-659000 kubelet[1648]: E0127 12:35:40.123734    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.613269    9948 command_runner.go:130] > Jan 27 12:35:40 multinode-659000 kubelet[1648]: E0127 12:35:40.124212    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.613269    9948 command_runner.go:130] > Jan 27 12:35:40 multinode-659000 kubelet[1648]: E0127 12:35:40.124507    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.613315    9948 command_runner.go:130] > Jan 27 12:35:40 multinode-659000 kubelet[1648]: E0127 12:35:40.124790    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.613351    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.138584    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.613351    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.139346    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.613398    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.139719    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.613437    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.469180    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-659000"
	I0127 12:36:50.613486    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.513020    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-multinode-659000\" already exists" pod="kube-system/etcd-multinode-659000"
	I0127 12:36:50.613486    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.513064    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:50.613522    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.538800    1648 kubelet_node_status.go:125] "Node was previously registered" node="multinode-659000"
	I0127 12:36:50.613522    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.538905    1648 kubelet_node_status.go:79] "Successfully registered node" node="multinode-659000"
	I0127 12:36:50.613565    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.538949    1648 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0127 12:36:50.613601    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.539897    1648 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0127 12:36:50.613601    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.540655    1648 setters.go:602] "Node became not ready" node="multinode-659000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-27T12:35:41Z","lastTransitionTime":"2025-01-27T12:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0127 12:36:50.613683    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.555833    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-multinode-659000\" already exists" pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:50.613683    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.555924    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:50.613724    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.574323    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-multinode-659000\" already exists" pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:50.613760    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.574484    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-multinode-659000"
	I0127 12:36:50.613760    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.589698    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-multinode-659000\" already exists" pod="kube-system/kube-scheduler-multinode-659000"
	I0127 12:36:50.613807    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.247993    1648 apiserver.go:52] "Watching apiserver"
	I0127 12:36:50.613843    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.255092    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-659000" podUID="f19e9efc-57cc-4e2a-b365-920592a7f352"
	I0127 12:36:50.613843    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.257281    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.613891    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.257504    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.613926    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.261197    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/etcd-multinode-659000" podUID="d2a9c448-86a1-48e3-8b48-345c937e5bb4"
	I0127 12:36:50.613973    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.277187    1648 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0127 12:36:50.613973    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.304401    1648 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/etcd-multinode-659000"
	I0127 12:36:50.614008    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.304607    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-659000"
	I0127 12:36:50.614055    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.309849    1648 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:50.614090    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.309963    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:50.614090    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.343249    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae3b8daf-d674-4cfe-8652-cb5ff6ba8615-lib-modules\") pod \"kube-proxy-s46mv\" (UID: \"ae3b8daf-d674-4cfe-8652-cb5ff6ba8615\") " pod="kube-system/kube-proxy-s46mv"
	I0127 12:36:50.614133    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.343617    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9b617a9c-e2b8-45fd-bee2-45cb03d4cd42-cni-cfg\") pod \"kindnet-z2hqq\" (UID: \"9b617a9c-e2b8-45fd-bee2-45cb03d4cd42\") " pod="kube-system/kindnet-z2hqq"
	I0127 12:36:50.614170    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.343779    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b617a9c-e2b8-45fd-bee2-45cb03d4cd42-lib-modules\") pod \"kindnet-z2hqq\" (UID: \"9b617a9c-e2b8-45fd-bee2-45cb03d4cd42\") " pod="kube-system/kindnet-z2hqq"
	I0127 12:36:50.614271    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.343961    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae3b8daf-d674-4cfe-8652-cb5ff6ba8615-xtables-lock\") pod \"kube-proxy-s46mv\" (UID: \"ae3b8daf-d674-4cfe-8652-cb5ff6ba8615\") " pod="kube-system/kube-proxy-s46mv"
	I0127 12:36:50.614334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.344263    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b617a9c-e2b8-45fd-bee2-45cb03d4cd42-xtables-lock\") pod \"kindnet-z2hqq\" (UID: \"9b617a9c-e2b8-45fd-bee2-45cb03d4cd42\") " pod="kube-system/kindnet-z2hqq"
	I0127 12:36:50.614374    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.344443    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bcfd7913-1bc0-4c24-882f-2be92ec9b046-tmp\") pod \"storage-provisioner\" (UID: \"bcfd7913-1bc0-4c24-882f-2be92ec9b046\") " pod="kube-system/storage-provisioner"
	I0127 12:36:50.614409    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.345456    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:50.614481    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.345573    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:42.845554363 +0000 UTC m=+6.750229019 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:50.614519    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.362165    1648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bf31ca1befb4fb3e8f2fd27458a3b80" path="/var/lib/kubelet/pods/6bf31ca1befb4fb3e8f2fd27458a3b80/volumes"
	I0127 12:36:50.614519    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.363294    1648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7291ea72d8be6e47ed8b536906d73549" path="/var/lib/kubelet/pods/7291ea72d8be6e47ed8b536906d73549/volumes"
	I0127 12:36:50.614590    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.396667    1648 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	I0127 12:36:50.614590    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.400478    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.614633    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.400505    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.614737    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.400550    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:42.900534148 +0000 UTC m=+6.805208804 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.614874    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.494698    1648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-659000" podStartSLOduration=0.494540064 podStartE2EDuration="494.540064ms" podCreationTimestamp="2025-01-27 12:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 12:35:42.473709794 +0000 UTC m=+6.378384350" watchObservedRunningTime="2025-01-27 12:35:42.494540064 +0000 UTC m=+6.399214620"
	I0127 12:36:50.614934    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.494964    1648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-659000" podStartSLOduration=0.494955765 podStartE2EDuration="494.955765ms" podCreationTimestamp="2025-01-27 12:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 12:35:42.493805361 +0000 UTC m=+6.398480017" watchObservedRunningTime="2025-01-27 12:35:42.494955765 +0000 UTC m=+6.399630321"
	I0127 12:36:50.614976    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.849608    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:50.615030    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.849827    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:43.849803559 +0000 UTC m=+7.754478115 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:50.615030    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.951539    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.615085    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.951579    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.951637    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:43.951620201 +0000 UTC m=+7.856294757 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: I0127 12:35:43.230846    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b613e9a7a356580fd5381e358408317fd6120a119c23f3f196adda302e5ca97f"
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: I0127 12:35:43.240666    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34d579bb511fec290478f20b13002063b43c1a71bd6f2f45f1d83bbd8ac971ab"
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.588436    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: I0127 12:35:43.594121    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d43e4cc62e0877d4b65191623d58195cd33c60eff33c6e49e605f69620d5115f"
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: I0127 12:35:43.594816    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-659000" podUID="f19e9efc-57cc-4e2a-b365-920592a7f352"
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.861607    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.861754    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:45.861734662 +0000 UTC m=+9.766409318 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.962791    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.962845    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.963033    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:45.962955102 +0000 UTC m=+9.867629758 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:44 multinode-659000 kubelet[1648]: E0127 12:35:44.356390    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.355639    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.883867    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.883991    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:49.883972962 +0000 UTC m=+13.788647618 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:50.615786    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.984260    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.615786    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.984313    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.615786    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.984377    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:49.984359299 +0000 UTC m=+13.889033855 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.615786    9948 command_runner.go:130] > Jan 27 12:35:46 multinode-659000 kubelet[1648]: E0127 12:35:46.358731    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.615948    9948 command_runner.go:130] > Jan 27 12:35:46 multinode-659000 kubelet[1648]: E0127 12:35:46.386967    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:47 multinode-659000 kubelet[1648]: E0127 12:35:47.355582    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:48 multinode-659000 kubelet[1648]: E0127 12:35:48.356308    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:49 multinode-659000 kubelet[1648]: E0127 12:35:49.356027    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:49 multinode-659000 kubelet[1648]: E0127 12:35:49.925365    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:49 multinode-659000 kubelet[1648]: E0127 12:35:49.925459    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:57.925443152 +0000 UTC m=+21.830117808 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:50 multinode-659000 kubelet[1648]: E0127 12:35:50.027100    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:50 multinode-659000 kubelet[1648]: E0127 12:35:50.027219    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:50 multinode-659000 kubelet[1648]: E0127 12:35:50.027346    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:58.027289813 +0000 UTC m=+21.931964469 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:50 multinode-659000 kubelet[1648]: E0127 12:35:50.355319    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:51 multinode-659000 kubelet[1648]: E0127 12:35:51.356503    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:51 multinode-659000 kubelet[1648]: E0127 12:35:51.388594    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:52 multinode-659000 kubelet[1648]: E0127 12:35:52.357390    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:53 multinode-659000 kubelet[1648]: E0127 12:35:53.355568    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:54 multinode-659000 kubelet[1648]: E0127 12:35:54.355531    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:55 multinode-659000 kubelet[1648]: E0127 12:35:55.356228    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:56 multinode-659000 kubelet[1648]: E0127 12:35:56.355726    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:56 multinode-659000 kubelet[1648]: E0127 12:35:56.392446    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:57 multinode-659000 kubelet[1648]: E0127 12:35:57.355790    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.616565    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.001233    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:50.616565    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.001401    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:36:14.001383565 +0000 UTC m=+37.906058121 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:50.616565    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.101493    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.616565    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.101659    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.101748    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:36:14.101732786 +0000 UTC m=+38.006407342 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.365026    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:35:59 multinode-659000 kubelet[1648]: E0127 12:35:59.356031    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:36:00 multinode-659000 kubelet[1648]: E0127 12:36:00.356282    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:36:01 multinode-659000 kubelet[1648]: E0127 12:36:01.356209    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:36:01 multinode-659000 kubelet[1648]: E0127 12:36:01.394292    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:36:02 multinode-659000 kubelet[1648]: E0127 12:36:02.355777    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:36:03 multinode-659000 kubelet[1648]: E0127 12:36:03.356166    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:36:04 multinode-659000 kubelet[1648]: E0127 12:36:04.356089    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:36:05 multinode-659000 kubelet[1648]: E0127 12:36:05.355458    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:36:06 multinode-659000 kubelet[1648]: E0127 12:36:06.356120    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:36:06 multinode-659000 kubelet[1648]: E0127 12:36:06.396811    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:36:07 multinode-659000 kubelet[1648]: E0127 12:36:07.355573    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:36:08 multinode-659000 kubelet[1648]: E0127 12:36:08.355837    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.617339    9948 command_runner.go:130] > Jan 27 12:36:09 multinode-659000 kubelet[1648]: E0127 12:36:09.355284    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.617339    9948 command_runner.go:130] > Jan 27 12:36:10 multinode-659000 kubelet[1648]: E0127 12:36:10.356199    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.617339    9948 command_runner.go:130] > Jan 27 12:36:11 multinode-659000 kubelet[1648]: E0127 12:36:11.356023    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.617339    9948 command_runner.go:130] > Jan 27 12:36:11 multinode-659000 kubelet[1648]: E0127 12:36:11.398054    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:50.617507    9948 command_runner.go:130] > Jan 27 12:36:12 multinode-659000 kubelet[1648]: E0127 12:36:12.355492    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.617539    9948 command_runner.go:130] > Jan 27 12:36:13 multinode-659000 kubelet[1648]: E0127 12:36:13.356291    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.617588    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.058689    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.058911    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:36:46.058858304 +0000 UTC m=+69.963532860 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.159091    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.159277    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.159495    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:36:46.15947175 +0000 UTC m=+70.064146406 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.357000    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:15 multinode-659000 kubelet[1648]: I0127 12:36:15.031682    1648 scope.go:117] "RemoveContainer" containerID="134620caeeb93fda5b32a71962e13d1994830a35b93b18ad2387296500dff7b5"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:15 multinode-659000 kubelet[1648]: I0127 12:36:15.032024    1648 scope.go:117] "RemoveContainer" containerID="9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:15 multinode-659000 kubelet[1648]: E0127 12:36:15.032236    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bcfd7913-1bc0-4c24-882f-2be92ec9b046)\"" pod="kube-system/storage-provisioner" podUID="bcfd7913-1bc0-4c24-882f-2be92ec9b046"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:15 multinode-659000 kubelet[1648]: E0127 12:36:15.355738    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:16 multinode-659000 kubelet[1648]: E0127 12:36:16.356191    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:16 multinode-659000 kubelet[1648]: E0127 12:36:16.399212    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:17 multinode-659000 kubelet[1648]: E0127 12:36:17.355082    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:18 multinode-659000 kubelet[1648]: E0127 12:36:18.356067    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:19 multinode-659000 kubelet[1648]: E0127 12:36:19.355675    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:20 multinode-659000 kubelet[1648]: E0127 12:36:20.356455    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:21 multinode-659000 kubelet[1648]: E0127 12:36:21.355971    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:21 multinode-659000 kubelet[1648]: E0127 12:36:21.401078    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:22 multinode-659000 kubelet[1648]: E0127 12:36:22.355954    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:23 multinode-659000 kubelet[1648]: E0127 12:36:23.355387    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.618208    9948 command_runner.go:130] > Jan 27 12:36:24 multinode-659000 kubelet[1648]: E0127 12:36:24.355437    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.618208    9948 command_runner.go:130] > Jan 27 12:36:25 multinode-659000 kubelet[1648]: E0127 12:36:25.356289    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.618208    9948 command_runner.go:130] > Jan 27 12:36:26 multinode-659000 kubelet[1648]: E0127 12:36:26.356493    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.618208    9948 command_runner.go:130] > Jan 27 12:36:26 multinode-659000 kubelet[1648]: E0127 12:36:26.402364    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:50.618401    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 kubelet[1648]: E0127 12:36:27.356407    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.618401    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 kubelet[1648]: I0127 12:36:27.357050    1648 scope.go:117] "RemoveContainer" containerID="9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f"
	I0127 12:36:50.618451    9948 command_runner.go:130] > Jan 27 12:36:28 multinode-659000 kubelet[1648]: E0127 12:36:28.356371    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.618481    9948 command_runner.go:130] > Jan 27 12:36:29 multinode-659000 kubelet[1648]: E0127 12:36:29.355555    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.618529    9948 command_runner.go:130] > Jan 27 12:36:30 multinode-659000 kubelet[1648]: E0127 12:36:30.356227    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.618560    9948 command_runner.go:130] > Jan 27 12:36:31 multinode-659000 kubelet[1648]: E0127 12:36:31.356043    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.618607    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]: I0127 12:36:36.363314    1648 scope.go:117] "RemoveContainer" containerID="5f274e5a8851d2aeb5403952c3fba0274fe53614e2e0995d1046693d7e725d5d"
	I0127 12:36:50.618652    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]: E0127 12:36:36.393311    1648 iptables.go:577] "Could not set up iptables canary" err=<
	I0127 12:36:50.618652    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0127 12:36:50.618699    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0127 12:36:50.618728    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0127 12:36:50.618728    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0127 12:36:50.618728    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]: I0127 12:36:36.409087    1648 scope.go:117] "RemoveContainer" containerID="f91e9c2d3ba64a6d34c9bab7c1953b46f4006e0bb493bd1ae993c489cd76e02c"
	I0127 12:36:50.663168    9948 logs.go:123] Gathering logs for dmesg ...
	I0127 12:36:50.663168    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:36:50.687151    9948 command_runner.go:130] > [Jan27 12:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0127 12:36:50.687151    9948 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0127 12:36:50.687151    9948 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0127 12:36:50.687151    9948 command_runner.go:130] > [  +0.124628] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0127 12:36:50.687151    9948 command_runner.go:130] > [  +0.022511] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0127 12:36:50.687347    9948 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0127 12:36:50.687361    9948 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0127 12:36:50.687424    9948 command_runner.go:130] > [  +0.069272] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0127 12:36:50.687424    9948 command_runner.go:130] > [  +0.020914] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0127 12:36:50.687464    9948 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0127 12:36:50.687464    9948 command_runner.go:130] > [Jan27 12:34] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0127 12:36:50.687464    9948 command_runner.go:130] > [  +0.706235] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0127 12:36:50.687464    9948 command_runner.go:130] > [  +1.791193] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0127 12:36:50.687464    9948 command_runner.go:130] > [  +6.780102] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0127 12:36:50.687561    9948 command_runner.go:130] > [  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0127 12:36:50.687561    9948 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0127 12:36:50.687561    9948 command_runner.go:130] > [Jan27 12:35] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	I0127 12:36:50.687561    9948 command_runner.go:130] > [  +0.194598] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	I0127 12:36:50.687561    9948 command_runner.go:130] > [ +25.881577] systemd-fstab-generator[1029]: Ignoring "noauto" option for root device
	I0127 12:36:50.687655    9948 command_runner.go:130] > [  +0.104839] kauditd_printk_skb: 75 callbacks suppressed
	I0127 12:36:50.687655    9948 command_runner.go:130] > [  +0.497850] systemd-fstab-generator[1069]: Ignoring "noauto" option for root device
	I0127 12:36:50.687655    9948 command_runner.go:130] > [  +0.189754] systemd-fstab-generator[1081]: Ignoring "noauto" option for root device
	I0127 12:36:50.687655    9948 command_runner.go:130] > [  +0.209865] systemd-fstab-generator[1095]: Ignoring "noauto" option for root device
	I0127 12:36:50.687655    9948 command_runner.go:130] > [  +2.995294] systemd-fstab-generator[1337]: Ignoring "noauto" option for root device
	I0127 12:36:50.687655    9948 command_runner.go:130] > [  +0.193187] systemd-fstab-generator[1349]: Ignoring "noauto" option for root device
	I0127 12:36:50.687727    9948 command_runner.go:130] > [  +0.167597] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	I0127 12:36:50.687727    9948 command_runner.go:130] > [  +0.247752] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	I0127 12:36:50.687727    9948 command_runner.go:130] > [  +0.858687] systemd-fstab-generator[1500]: Ignoring "noauto" option for root device
	I0127 12:36:50.687727    9948 command_runner.go:130] > [  +0.090112] kauditd_printk_skb: 206 callbacks suppressed
	I0127 12:36:50.687727    9948 command_runner.go:130] > [  +3.380441] systemd-fstab-generator[1641]: Ignoring "noauto" option for root device
	I0127 12:36:50.687727    9948 command_runner.go:130] > [  +1.786352] kauditd_printk_skb: 64 callbacks suppressed
	I0127 12:36:50.687727    9948 command_runner.go:130] > [  +5.236723] kauditd_printk_skb: 10 callbacks suppressed
	I0127 12:36:50.687802    9948 command_runner.go:130] > [  +4.105586] systemd-fstab-generator[2522]: Ignoring "noauto" option for root device
	I0127 12:36:50.687802    9948 command_runner.go:130] > [Jan27 12:36] kauditd_printk_skb: 70 callbacks suppressed
	I0127 12:36:50.689662    9948 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:36:50.689662    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 12:36:50.977343    9948 command_runner.go:130] > Name:               multinode-659000
	I0127 12:36:50.977343    9948 command_runner.go:130] > Roles:              control-plane
	I0127 12:36:50.977343    9948 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0127 12:36:50.977343    9948 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0127 12:36:50.977343    9948 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0127 12:36:50.977343    9948 command_runner.go:130] >                     kubernetes.io/hostname=multinode-659000
	I0127 12:36:50.977343    9948 command_runner.go:130] >                     kubernetes.io/os=linux
	I0127 12:36:50.977343    9948 command_runner.go:130] >                     minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	I0127 12:36:50.977343    9948 command_runner.go:130] >                     minikube.k8s.io/name=multinode-659000
	I0127 12:36:50.977343    9948 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0127 12:36:50.977504    9948 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_01_27T12_12_00_0700
	I0127 12:36:50.977534    9948 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0127 12:36:50.977534    9948 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0127 12:36:50.977534    9948 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0127 12:36:50.977534    9948 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0127 12:36:50.977534    9948 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0127 12:36:50.977626    9948 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0127 12:36:50.977645    9948 command_runner.go:130] > CreationTimestamp:  Mon, 27 Jan 2025 12:11:55 +0000
	I0127 12:36:50.977645    9948 command_runner.go:130] > Taints:             <none>
	I0127 12:36:50.977645    9948 command_runner.go:130] > Unschedulable:      false
	I0127 12:36:50.977645    9948 command_runner.go:130] > Lease:
	I0127 12:36:50.977645    9948 command_runner.go:130] >   HolderIdentity:  multinode-659000
	I0127 12:36:50.977645    9948 command_runner.go:130] >   AcquireTime:     <unset>
	I0127 12:36:50.977645    9948 command_runner.go:130] >   RenewTime:       Mon, 27 Jan 2025 12:36:42 +0000
	I0127 12:36:50.977703    9948 command_runner.go:130] > Conditions:
	I0127 12:36:50.977788    9948 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0127 12:36:50.977788    9948 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0127 12:36:50.977788    9948 command_runner.go:130] >   MemoryPressure   False   Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:11:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0127 12:36:50.977788    9948 command_runner.go:130] >   DiskPressure     False   Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:11:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0127 12:36:50.977788    9948 command_runner.go:130] >   PIDPressure      False   Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:11:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0127 12:36:50.977788    9948 command_runner.go:130] >   Ready            True    Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:36:32 +0000   KubeletReady                 kubelet is posting ready status
	I0127 12:36:50.977788    9948 command_runner.go:130] > Addresses:
	I0127 12:36:50.977788    9948 command_runner.go:130] >   InternalIP:  172.29.198.106
	I0127 12:36:50.977788    9948 command_runner.go:130] >   Hostname:    multinode-659000
	I0127 12:36:50.977788    9948 command_runner.go:130] > Capacity:
	I0127 12:36:50.977788    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:50.977788    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:50.977788    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:50.977788    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:50.977788    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:50.977788    9948 command_runner.go:130] > Allocatable:
	I0127 12:36:50.977788    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:50.978341    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:50.978341    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:50.978341    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:50.978341    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:50.978341    9948 command_runner.go:130] > System Info:
	I0127 12:36:50.978341    9948 command_runner.go:130] >   Machine ID:                 312902fc96b948148d51eecf097c4a9d
	I0127 12:36:50.978341    9948 command_runner.go:130] >   System UUID:                be6234aa-9e29-bb41-8165-59b265a4d7d0
	I0127 12:36:50.978341    9948 command_runner.go:130] >   Boot ID:                    058425a5-0652-4c5c-a517-2369b8cac13d
	I0127 12:36:50.978453    9948 command_runner.go:130] >   Kernel Version:             5.10.207
	I0127 12:36:50.978453    9948 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0127 12:36:50.978491    9948 command_runner.go:130] >   Operating System:           linux
	I0127 12:36:50.978491    9948 command_runner.go:130] >   Architecture:               amd64
	I0127 12:36:50.978491    9948 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0127 12:36:50.978542    9948 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0127 12:36:50.978542    9948 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0127 12:36:50.978570    9948 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0127 12:36:50.978600    9948 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0127 12:36:50.978600    9948 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0127 12:36:50.978600    9948 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0127 12:36:50.978639    9948 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0127 12:36:50.978639    9948 command_runner.go:130] >   default                     busybox-58667487b6-2jq9j                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0127 12:36:50.978683    9948 command_runner.go:130] >   kube-system                 coredns-668d6bf9bc-2qw6w                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0127 12:36:50.978683    9948 command_runner.go:130] >   kube-system                 etcd-multinode-659000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         68s
	I0127 12:36:50.978683    9948 command_runner.go:130] >   kube-system                 kindnet-z2hqq                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0127 12:36:50.978760    9948 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-659000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	I0127 12:36:50.978788    9948 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-659000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0127 12:36:50.978788    9948 command_runner.go:130] >   kube-system                 kube-proxy-s46mv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0127 12:36:50.978871    9948 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-659000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0127 12:36:50.978871    9948 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0127 12:36:50.978871    9948 command_runner.go:130] > Allocated resources:
	I0127 12:36:50.978871    9948 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0127 12:36:50.978871    9948 command_runner.go:130] >   Resource           Requests     Limits
	I0127 12:36:50.978871    9948 command_runner.go:130] >   --------           --------     ------
	I0127 12:36:50.978871    9948 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0127 12:36:50.978931    9948 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0127 12:36:50.978931    9948 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0127 12:36:50.978998    9948 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0127 12:36:50.979037    9948 command_runner.go:130] > Events:
	I0127 12:36:50.979072    9948 command_runner.go:130] >   Type     Reason                   Age                From             Message
	I0127 12:36:50.979105    9948 command_runner.go:130] >   ----     ------                   ----               ----             -------
	I0127 12:36:50.979105    9948 command_runner.go:130] >   Normal   Starting                 24m                kube-proxy       
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   Starting                 65s                kube-proxy       
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   Starting                 24m                kubelet          Starting kubelet.
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node multinode-659000 status is now: NodeHasSufficientMemory
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node multinode-659000 status is now: NodeHasNoDiskPressure
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node multinode-659000 status is now: NodeHasSufficientPID
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    24m                kubelet          Node multinode-659000 status is now: NodeHasNoDiskPressure
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   NodeHasSufficientMemory  24m                kubelet          Node multinode-659000 status is now: NodeHasSufficientMemory
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   NodeHasSufficientPID     24m                kubelet          Node multinode-659000 status is now: NodeHasSufficientPID
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   Starting                 24m                kubelet          Starting kubelet.
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   RegisteredNode           24m                node-controller  Node multinode-659000 event: Registered Node multinode-659000 in Controller
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   NodeReady                24m                kubelet          Node multinode-659000 status is now: NodeReady
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   Starting                 74s                kubelet          Starting kubelet.
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   NodeHasSufficientMemory  74s (x8 over 74s)  kubelet          Node multinode-659000 status is now: NodeHasSufficientMemory
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    74s (x8 over 74s)  kubelet          Node multinode-659000 status is now: NodeHasNoDiskPressure
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   NodeHasSufficientPID     74s (x7 over 74s)  kubelet          Node multinode-659000 status is now: NodeHasSufficientPID
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Warning  Rebooted                 69s                kubelet          Node multinode-659000 has been rebooted, boot id: 058425a5-0652-4c5c-a517-2369b8cac13d
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   RegisteredNode           66s                node-controller  Node multinode-659000 event: Registered Node multinode-659000 in Controller
	I0127 12:36:50.979132    9948 command_runner.go:130] > Name:               multinode-659000-m02
	I0127 12:36:50.979132    9948 command_runner.go:130] > Roles:              <none>
	I0127 12:36:50.979132    9948 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0127 12:36:50.979132    9948 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0127 12:36:50.979132    9948 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0127 12:36:50.979132    9948 command_runner.go:130] >                     kubernetes.io/hostname=multinode-659000-m02
	I0127 12:36:50.979132    9948 command_runner.go:130] >                     kubernetes.io/os=linux
	I0127 12:36:50.979132    9948 command_runner.go:130] >                     minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	I0127 12:36:50.979132    9948 command_runner.go:130] >                     minikube.k8s.io/name=multinode-659000
	I0127 12:36:50.979132    9948 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0127 12:36:50.979132    9948 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_01_27T12_15_08_0700
	I0127 12:36:50.979132    9948 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0127 12:36:50.979657    9948 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0127 12:36:50.979657    9948 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0127 12:36:50.979657    9948 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0127 12:36:50.979713    9948 command_runner.go:130] > CreationTimestamp:  Mon, 27 Jan 2025 12:15:07 +0000
	I0127 12:36:50.979713    9948 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0127 12:36:50.979713    9948 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0127 12:36:50.979713    9948 command_runner.go:130] > Unschedulable:      false
	I0127 12:36:50.979713    9948 command_runner.go:130] > Lease:
	I0127 12:36:50.979713    9948 command_runner.go:130] >   HolderIdentity:  multinode-659000-m02
	I0127 12:36:50.979713    9948 command_runner.go:130] >   AcquireTime:     <unset>
	I0127 12:36:50.979713    9948 command_runner.go:130] >   RenewTime:       Mon, 27 Jan 2025 12:32:39 +0000
	I0127 12:36:50.979814    9948 command_runner.go:130] > Conditions:
	I0127 12:36:50.979814    9948 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0127 12:36:50.979814    9948 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0127 12:36:50.979814    9948 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:50.979874    9948 command_runner.go:130] >   DiskPressure     Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:50.979897    9948 command_runner.go:130] >   PIDPressure      Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Ready            Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:50.979918    9948 command_runner.go:130] > Addresses:
	I0127 12:36:50.979918    9948 command_runner.go:130] >   InternalIP:  172.29.199.129
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Hostname:    multinode-659000-m02
	I0127 12:36:50.979918    9948 command_runner.go:130] > Capacity:
	I0127 12:36:50.979918    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:50.979918    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:50.979918    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:50.979918    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:50.979918    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:50.979918    9948 command_runner.go:130] > Allocatable:
	I0127 12:36:50.979918    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:50.979918    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:50.979918    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:50.979918    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:50.979918    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:50.979918    9948 command_runner.go:130] > System Info:
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Machine ID:                 30ce15ff72904b54b07c49f3e2f28802
	I0127 12:36:50.979918    9948 command_runner.go:130] >   System UUID:                b6923799-fa1e-b54c-9340-50dd6a2378f5
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Boot ID:                    3308d183-ec79-4aeb-9d90-80d47cdbff63
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Kernel Version:             5.10.207
	I0127 12:36:50.979918    9948 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Operating System:           linux
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Architecture:               amd64
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0127 12:36:50.979918    9948 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0127 12:36:50.979918    9948 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0127 12:36:50.979918    9948 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0127 12:36:50.979918    9948 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0127 12:36:50.979918    9948 command_runner.go:130] >   default                     busybox-58667487b6-ktfxc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0127 12:36:50.979918    9948 command_runner.go:130] >   kube-system                 kindnet-n7vjl               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0127 12:36:50.979918    9948 command_runner.go:130] >   kube-system                 kube-proxy-pjhc8            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0127 12:36:50.979918    9948 command_runner.go:130] > Allocated resources:
	I0127 12:36:50.979918    9948 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Resource           Requests   Limits
	I0127 12:36:50.979918    9948 command_runner.go:130] >   --------           --------   ------
	I0127 12:36:50.979918    9948 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0127 12:36:50.979918    9948 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0127 12:36:50.979918    9948 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0127 12:36:50.979918    9948 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0127 12:36:50.979918    9948 command_runner.go:130] > Events:
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0127 12:36:50.979918    9948 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-659000-m02 status is now: NodeHasSufficientMemory
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-659000-m02 status is now: NodeHasNoDiskPressure
	I0127 12:36:50.980451    9948 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-659000-m02 status is now: NodeHasSufficientPID
	I0127 12:36:50.980451    9948 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:50.980451    9948 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-659000-m02 event: Registered Node multinode-659000-m02 in Controller
	I0127 12:36:50.980610    9948 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-659000-m02 status is now: NodeReady
	I0127 12:36:50.980610    9948 command_runner.go:130] >   Normal  RegisteredNode           66s                node-controller  Node multinode-659000-m02 event: Registered Node multinode-659000-m02 in Controller
	I0127 12:36:50.980610    9948 command_runner.go:130] >   Normal  NodeNotReady             16s                node-controller  Node multinode-659000-m02 status is now: NodeNotReady
	I0127 12:36:50.980610    9948 command_runner.go:130] > Name:               multinode-659000-m03
	I0127 12:36:50.980610    9948 command_runner.go:130] > Roles:              <none>
	I0127 12:36:50.980610    9948 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0127 12:36:50.980610    9948 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0127 12:36:50.980610    9948 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0127 12:36:50.980610    9948 command_runner.go:130] >                     kubernetes.io/hostname=multinode-659000-m03
	I0127 12:36:50.980610    9948 command_runner.go:130] >                     kubernetes.io/os=linux
	I0127 12:36:50.980610    9948 command_runner.go:130] >                     minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	I0127 12:36:50.980610    9948 command_runner.go:130] >                     minikube.k8s.io/name=multinode-659000
	I0127 12:36:50.980610    9948 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0127 12:36:50.980610    9948 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_01_27T12_31_04_0700
	I0127 12:36:50.980610    9948 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0127 12:36:50.980610    9948 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0127 12:36:50.980610    9948 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0127 12:36:50.980610    9948 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0127 12:36:50.980610    9948 command_runner.go:130] > CreationTimestamp:  Mon, 27 Jan 2025 12:31:04 +0000
	I0127 12:36:50.980610    9948 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0127 12:36:50.980610    9948 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0127 12:36:50.980610    9948 command_runner.go:130] > Unschedulable:      false
	I0127 12:36:50.980610    9948 command_runner.go:130] > Lease:
	I0127 12:36:50.980610    9948 command_runner.go:130] >   HolderIdentity:  multinode-659000-m03
	I0127 12:36:50.980610    9948 command_runner.go:130] >   AcquireTime:     <unset>
	I0127 12:36:50.980610    9948 command_runner.go:130] >   RenewTime:       Mon, 27 Jan 2025 12:32:15 +0000
	I0127 12:36:50.980610    9948 command_runner.go:130] > Conditions:
	I0127 12:36:50.980610    9948 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0127 12:36:50.980610    9948 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0127 12:36:50.980610    9948 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:50.980610    9948 command_runner.go:130] >   DiskPressure     Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:50.980610    9948 command_runner.go:130] >   PIDPressure      Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:50.980610    9948 command_runner.go:130] >   Ready            Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:50.980610    9948 command_runner.go:130] > Addresses:
	I0127 12:36:50.980610    9948 command_runner.go:130] >   InternalIP:  172.29.206.88
	I0127 12:36:50.980610    9948 command_runner.go:130] >   Hostname:    multinode-659000-m03
	I0127 12:36:50.980610    9948 command_runner.go:130] > Capacity:
	I0127 12:36:50.980610    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:50.980610    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:50.980610    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:50.980610    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:50.981195    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:50.981195    9948 command_runner.go:130] > Allocatable:
	I0127 12:36:50.981195    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:50.981195    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:50.981255    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:50.981255    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:50.981255    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:50.981255    9948 command_runner.go:130] > System Info:
	I0127 12:36:50.981255    9948 command_runner.go:130] >   Machine ID:                 5cd7b7bdbad940e0831e949f70fdd5af
	I0127 12:36:50.981255    9948 command_runner.go:130] >   System UUID:                bab0a90b-9ed8-ba42-88b9-fc6568ad7a53
	I0127 12:36:50.981255    9948 command_runner.go:130] >   Boot ID:                    9d0d04c8-71ef-487a-a13c-e1de6463b3fe
	I0127 12:36:50.981255    9948 command_runner.go:130] >   Kernel Version:             5.10.207
	I0127 12:36:50.981347    9948 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0127 12:36:50.981347    9948 command_runner.go:130] >   Operating System:           linux
	I0127 12:36:50.981347    9948 command_runner.go:130] >   Architecture:               amd64
	I0127 12:36:50.981347    9948 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0127 12:36:50.981347    9948 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0127 12:36:50.981347    9948 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0127 12:36:50.981347    9948 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0127 12:36:50.981407    9948 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0127 12:36:50.981407    9948 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0127 12:36:50.981407    9948 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0127 12:36:50.981407    9948 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0127 12:36:50.981493    9948 command_runner.go:130] >   kube-system                 kindnet-kpfjt       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0127 12:36:50.981551    9948 command_runner.go:130] >   kube-system                 kube-proxy-sk5js    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0127 12:36:50.981573    9948 command_runner.go:130] > Allocated resources:
	I0127 12:36:50.981573    9948 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0127 12:36:50.981573    9948 command_runner.go:130] >   Resource           Requests   Limits
	I0127 12:36:50.981573    9948 command_runner.go:130] >   --------           --------   ------
	I0127 12:36:50.981573    9948 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0127 12:36:50.981573    9948 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0127 12:36:50.981573    9948 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0127 12:36:50.981573    9948 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0127 12:36:50.981648    9948 command_runner.go:130] > Events:
	I0127 12:36:50.981648    9948 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0127 12:36:50.981648    9948 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0127 12:36:50.981648    9948 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0127 12:36:50.981648    9948 command_runner.go:130] >   Normal  Starting                 5m43s                  kube-proxy       
	I0127 12:36:50.981708    9948 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientMemory
	I0127 12:36:50.981708    9948 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientPID
	I0127 12:36:50.981708    9948 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:50.981801    9948 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-659000-m03 status is now: NodeHasNoDiskPressure
	I0127 12:36:50.981801    9948 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-659000-m03 status is now: NodeReady
	I0127 12:36:50.981801    9948 command_runner.go:130] >   Normal  Starting                 5m47s                  kubelet          Starting kubelet.
	I0127 12:36:50.981801    9948 command_runner.go:130] >   Normal  CIDRAssignmentFailed     5m46s                  cidrAllocator    Node multinode-659000-m03 status is now: CIDRAssignmentFailed
	I0127 12:36:50.981801    9948 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m46s (x2 over 5m46s)  kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientMemory
	I0127 12:36:50.981801    9948 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m46s (x2 over 5m46s)  kubelet          Node multinode-659000-m03 status is now: NodeHasNoDiskPressure
	I0127 12:36:50.981801    9948 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m46s (x2 over 5m46s)  kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientPID
	I0127 12:36:50.981988    9948 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m46s                  kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:50.981988    9948 command_runner.go:130] >   Normal  RegisteredNode           5m42s                  node-controller  Node multinode-659000-m03 event: Registered Node multinode-659000-m03 in Controller
	I0127 12:36:50.981988    9948 command_runner.go:130] >   Normal  NodeReady                5m28s                  kubelet          Node multinode-659000-m03 status is now: NodeReady
	I0127 12:36:50.981988    9948 command_runner.go:130] >   Normal  NodeNotReady             3m42s                  node-controller  Node multinode-659000-m03 status is now: NodeNotReady
	I0127 12:36:50.982054    9948 command_runner.go:130] >   Normal  RegisteredNode           66s                    node-controller  Node multinode-659000-m03 event: Registered Node multinode-659000-m03 in Controller
	I0127 12:36:50.991768    9948 logs.go:123] Gathering logs for coredns [f818dd15d8b0] ...
	I0127 12:36:50.991768    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f818dd15d8b0"
	I0127 12:36:51.024368    9948 command_runner.go:130] > .:53
	I0127 12:36:51.024368    9948 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 5e2e325279dfa828a8fd1b44d83ab4703abb0247d4beadde42157147650fe687c0862eaa4caa15a5d9139c48c9a9dd5ec3cd962ba60368e8ffb4d02ae4d29aeb
	I0127 12:36:51.024426    9948 command_runner.go:130] > CoreDNS-1.11.3
	I0127 12:36:51.024426    9948 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0127 12:36:51.024426    9948 command_runner.go:130] > [INFO] 127.0.0.1:50782 - 35950 "HINFO IN 8787717511470146079.8254135695837817311. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.151481959s
	I0127 12:36:51.024426    9948 command_runner.go:130] > [INFO] 10.244.0.3:56186 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000430505s
	I0127 12:36:51.024480    9948 command_runner.go:130] > [INFO] 10.244.0.3:58756 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.126738988s
	I0127 12:36:51.024504    9948 command_runner.go:130] > [INFO] 10.244.0.3:36399 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.053330342s
	I0127 12:36:51.024504    9948 command_runner.go:130] > [INFO] 10.244.0.3:35359 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.140941591s
	I0127 12:36:51.024504    9948 command_runner.go:130] > [INFO] 10.244.1.2:41150 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000220803s
	I0127 12:36:51.024565    9948 command_runner.go:130] > [INFO] 10.244.1.2:57591 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0000709s
	I0127 12:36:51.024565    9948 command_runner.go:130] > [INFO] 10.244.1.2:45132 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000133002s
	I0127 12:36:51.024565    9948 command_runner.go:130] > [INFO] 10.244.1.2:48593 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000728s
	I0127 12:36:51.024565    9948 command_runner.go:130] > [INFO] 10.244.0.3:53274 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000261802s
	I0127 12:36:51.024641    9948 command_runner.go:130] > [INFO] 10.244.0.3:57676 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.069110701s
	I0127 12:36:51.024641    9948 command_runner.go:130] > [INFO] 10.244.0.3:59948 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000178302s
	I0127 12:36:51.024668    9948 command_runner.go:130] > [INFO] 10.244.0.3:39801 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000198802s
	I0127 12:36:51.024710    9948 command_runner.go:130] > [INFO] 10.244.0.3:45673 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023238636s
	I0127 12:36:51.024730    9948 command_runner.go:130] > [INFO] 10.244.0.3:42840 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154002s
	I0127 12:36:51.024730    9948 command_runner.go:130] > [INFO] 10.244.0.3:43505 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000181002s
	I0127 12:36:51.024730    9948 command_runner.go:130] > [INFO] 10.244.0.3:34935 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092101s
	I0127 12:36:51.024821    9948 command_runner.go:130] > [INFO] 10.244.1.2:54822 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155102s
	I0127 12:36:51.024821    9948 command_runner.go:130] > [INFO] 10.244.1.2:50877 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000188102s
	I0127 12:36:51.024846    9948 command_runner.go:130] > [INFO] 10.244.1.2:45384 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183802s
	I0127 12:36:51.024881    9948 command_runner.go:130] > [INFO] 10.244.1.2:35073 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000227202s
	I0127 12:36:51.024881    9948 command_runner.go:130] > [INFO] 10.244.1.2:50517 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000061101s
	I0127 12:36:51.024881    9948 command_runner.go:130] > [INFO] 10.244.1.2:37353 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130501s
	I0127 12:36:51.024933    9948 command_runner.go:130] > [INFO] 10.244.1.2:42117 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114301s
	I0127 12:36:51.024954    9948 command_runner.go:130] > [INFO] 10.244.1.2:46171 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060401s
	I0127 12:36:51.024979    9948 command_runner.go:130] > [INFO] 10.244.0.3:55282 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117601s
	I0127 12:36:51.024979    9948 command_runner.go:130] > [INFO] 10.244.0.3:41761 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162301s
	I0127 12:36:51.025011    9948 command_runner.go:130] > [INFO] 10.244.0.3:35358 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000218902s
	I0127 12:36:51.025011    9948 command_runner.go:130] > [INFO] 10.244.0.3:50342 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124402s
	I0127 12:36:51.025045    9948 command_runner.go:130] > [INFO] 10.244.1.2:38159 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159602s
	I0127 12:36:51.025045    9948 command_runner.go:130] > [INFO] 10.244.1.2:37043 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000171002s
	I0127 12:36:51.025078    9948 command_runner.go:130] > [INFO] 10.244.1.2:50762 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000168301s
	I0127 12:36:51.025078    9948 command_runner.go:130] > [INFO] 10.244.1.2:33014 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000603s
	I0127 12:36:51.025129    9948 command_runner.go:130] > [INFO] 10.244.0.3:34941 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134301s
	I0127 12:36:51.025129    9948 command_runner.go:130] > [INFO] 10.244.0.3:60117 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000393904s
	I0127 12:36:51.025174    9948 command_runner.go:130] > [INFO] 10.244.0.3:47506 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000214402s
	I0127 12:36:51.025174    9948 command_runner.go:130] > [INFO] 10.244.0.3:42968 - 5 "PTR IN 1.192.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000443604s
	I0127 12:36:51.025174    9948 command_runner.go:130] > [INFO] 10.244.1.2:52260 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193802s
	I0127 12:36:51.025174    9948 command_runner.go:130] > [INFO] 10.244.1.2:40492 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000310903s
	I0127 12:36:51.025174    9948 command_runner.go:130] > [INFO] 10.244.1.2:50341 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074s
	I0127 12:36:51.025174    9948 command_runner.go:130] > [INFO] 10.244.1.2:41676 - 5 "PTR IN 1.192.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000637s
	I0127 12:36:51.025174    9948 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0127 12:36:51.025174    9948 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0127 12:36:51.027582    9948 logs.go:123] Gathering logs for kube-proxy [0283b35dee3c] ...
	I0127 12:36:51.027582    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0283b35dee3c"
	I0127 12:36:51.061534    9948 command_runner.go:130] ! I0127 12:35:44.449716       1 server_linux.go:66] "Using iptables proxy"
	I0127 12:36:51.061534    9948 command_runner.go:130] ! E0127 12:35:44.569403       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0127 12:36:51.061534    9948 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0127 12:36:51.061637    9948 command_runner.go:130] ! 	add table ip kube-proxy
	I0127 12:36:51.061637    9948 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:51.061637    9948 command_runner.go:130] !  >
	I0127 12:36:51.061637    9948 command_runner.go:130] ! E0127 12:35:44.599245       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0127 12:36:51.061637    9948 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0127 12:36:51.061637    9948 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0127 12:36:51.061637    9948 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:51.061637    9948 command_runner.go:130] !  >
	I0127 12:36:51.061702    9948 command_runner.go:130] ! I0127 12:35:44.767652       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.198.106"]
	I0127 12:36:51.061736    9948 command_runner.go:130] ! E0127 12:35:44.770299       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 12:36:51.061773    9948 command_runner.go:130] ! I0127 12:35:45.038438       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 12:36:51.061773    9948 command_runner.go:130] ! I0127 12:35:45.038556       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 12:36:51.061864    9948 command_runner.go:130] ! I0127 12:35:45.038587       1 server_linux.go:170] "Using iptables Proxier"
	I0127 12:36:51.061864    9948 command_runner.go:130] ! I0127 12:35:45.043111       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 12:36:51.061864    9948 command_runner.go:130] ! I0127 12:35:45.045042       1 server.go:497] "Version info" version="v1.32.1"
	I0127 12:36:51.061966    9948 command_runner.go:130] ! I0127 12:35:45.045375       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:51.061995    9948 command_runner.go:130] ! I0127 12:35:45.053262       1 config.go:199] "Starting service config controller"
	I0127 12:36:51.061995    9948 command_runner.go:130] ! I0127 12:35:45.054808       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 12:36:51.061995    9948 command_runner.go:130] ! I0127 12:35:45.054873       1 config.go:329] "Starting node config controller"
	I0127 12:36:51.061995    9948 command_runner.go:130] ! I0127 12:35:45.054880       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 12:36:51.062237    9948 command_runner.go:130] ! I0127 12:35:45.058308       1 config.go:105] "Starting endpoint slice config controller"
	I0127 12:36:51.062290    9948 command_runner.go:130] ! I0127 12:35:45.058492       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 12:36:51.062317    9948 command_runner.go:130] ! I0127 12:35:45.155116       1 shared_informer.go:320] Caches are synced for node config
	I0127 12:36:51.062317    9948 command_runner.go:130] ! I0127 12:35:45.155116       1 shared_informer.go:320] Caches are synced for service config
	I0127 12:36:51.062317    9948 command_runner.go:130] ! I0127 12:35:45.159566       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 12:36:51.065663    9948 logs.go:123] Gathering logs for kube-controller-manager [8d4872cda28d] ...
	I0127 12:36:51.065663    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4872cda28d"
	I0127 12:36:51.100308    9948 command_runner.go:130] ! I0127 12:35:39.384985       1 serving.go:386] Generated self-signed cert in-memory
	I0127 12:36:51.101314    9948 command_runner.go:130] ! I0127 12:35:39.805936       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0127 12:36:51.101314    9948 command_runner.go:130] ! I0127 12:35:39.811206       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:51.101314    9948 command_runner.go:130] ! I0127 12:35:39.817632       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0127 12:36:51.101399    9948 command_runner.go:130] ! I0127 12:35:39.822579       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:51.101399    9948 command_runner.go:130] ! I0127 12:35:39.822772       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 12:36:51.101399    9948 command_runner.go:130] ! I0127 12:35:39.823033       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:51.101399    9948 command_runner.go:130] ! I0127 12:35:43.406116       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0127 12:36:51.101399    9948 command_runner.go:130] ! I0127 12:35:43.407249       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0127 12:36:51.101462    9948 command_runner.go:130] ! I0127 12:35:43.417237       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0127 12:36:51.101487    9948 command_runner.go:130] ! I0127 12:35:43.417292       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0127 12:36:51.101487    9948 command_runner.go:130] ! I0127 12:35:43.417300       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0127 12:36:51.101487    9948 command_runner.go:130] ! I0127 12:35:43.417307       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0127 12:36:51.101487    9948 command_runner.go:130] ! I0127 12:35:43.417506       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0127 12:36:51.101487    9948 command_runner.go:130] ! I0127 12:35:43.417534       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0127 12:36:51.101487    9948 command_runner.go:130] ! I0127 12:35:43.417553       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0127 12:36:51.101487    9948 command_runner.go:130] ! I0127 12:35:43.431621       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0127 12:36:51.101593    9948 command_runner.go:130] ! I0127 12:35:43.431964       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0127 12:36:51.101593    9948 command_runner.go:130] ! I0127 12:35:43.431989       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0127 12:36:51.101664    9948 command_runner.go:130] ! I0127 12:35:43.432010       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0127 12:36:51.101664    9948 command_runner.go:130] ! I0127 12:35:43.442961       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0127 12:36:51.101711    9948 command_runner.go:130] ! I0127 12:35:43.447308       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0127 12:36:51.101737    9948 command_runner.go:130] ! I0127 12:35:43.447396       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0127 12:36:51.101767    9948 command_runner.go:130] ! I0127 12:35:43.449412       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.449608       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.466583       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.467490       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.467508       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.491988       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.493672       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.493698       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.498557       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.503953       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.503976       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.505729       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.505861       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.505872       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.509718       1 shared_informer.go:320] Caches are synced for tokens
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.510192       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.510208       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.510698       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.510714       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.512896       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.513433       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.513448       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.516433       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.516659       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.516671       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.524334       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.524358       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.524545       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0127 12:36:51.102333    9948 command_runner.go:130] ! I0127 12:35:43.524557       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0127 12:36:51.102333    9948 command_runner.go:130] ! I0127 12:35:43.534871       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0127 12:36:51.102333    9948 command_runner.go:130] ! I0127 12:35:43.535028       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0127 12:36:51.102333    9948 command_runner.go:130] ! I0127 12:35:43.535038       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0127 12:36:51.102333    9948 command_runner.go:130] ! I0127 12:35:43.557745       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0127 12:36:51.102333    9948 command_runner.go:130] ! I0127 12:35:43.557975       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0127 12:36:51.102333    9948 command_runner.go:130] ! I0127 12:35:43.612615       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0127 12:36:51.102514    9948 command_runner.go:130] ! I0127 12:35:43.612890       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0127 12:36:51.102539    9948 command_runner.go:130] ! I0127 12:35:43.612906       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0127 12:36:51.102539    9948 command_runner.go:130] ! I0127 12:35:43.616333       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0127 12:36:51.102566    9948 command_runner.go:130] ! I0127 12:35:43.627087       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.627107       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.692864       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.692892       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.693095       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.700796       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.703832       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.703867       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.713912       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.714114       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.714094       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.714712       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.714721       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.721904       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.722372       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.723076       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.739709       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.739886       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.739897       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.748074       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.748419       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.748432       1 shared_informer.go:313] Waiting for caches to sync for job
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.774085       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.774108       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.774196       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.814844       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0127 12:36:51.103170    9948 command_runner.go:130] ! I0127 12:35:43.815383       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0127 12:36:51.103170    9948 command_runner.go:130] ! I0127 12:35:43.815410       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0127 12:36:51.103230    9948 command_runner.go:130] ! W0127 12:35:43.815432       1 shared_informer.go:597] resyncPeriod 17h46m45.188948257s is smaller than resyncCheckPeriod 20h1m58.14772951s and the informer has already started. Changing it to 20h1m58.14772951s
	I0127 12:36:51.103230    9948 command_runner.go:130] ! I0127 12:35:43.815487       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0127 12:36:51.103230    9948 command_runner.go:130] ! I0127 12:35:43.815503       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0127 12:36:51.103322    9948 command_runner.go:130] ! I0127 12:35:43.816077       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0127 12:36:51.103348    9948 command_runner.go:130] ! I0127 12:35:43.816613       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0127 12:36:51.103348    9948 command_runner.go:130] ! I0127 12:35:43.817053       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0127 12:36:51.103348    9948 command_runner.go:130] ! I0127 12:35:43.817252       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0127 12:36:51.103414    9948 command_runner.go:130] ! I0127 12:35:43.817373       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0127 12:36:51.103414    9948 command_runner.go:130] ! I0127 12:35:43.817397       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0127 12:36:51.103414    9948 command_runner.go:130] ! W0127 12:35:43.818105       1 shared_informer.go:597] resyncPeriod 12h27m56.377400464s is smaller than resyncCheckPeriod 20h1m58.14772951s and the informer has already started. Changing it to 20h1m58.14772951s
	I0127 12:36:51.103475    9948 command_runner.go:130] ! I0127 12:35:43.818223       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0127 12:36:51.103475    9948 command_runner.go:130] ! I0127 12:35:43.818270       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0127 12:36:51.103475    9948 command_runner.go:130] ! I0127 12:35:43.818295       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0127 12:36:51.103555    9948 command_runner.go:130] ! I0127 12:35:43.818319       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.818336       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.818363       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.818376       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.818392       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.818410       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.818442       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.818764       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.818778       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.819843       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.841955       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.842559       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.842587       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.842995       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.852026       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.852211       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.852253       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.922876       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.923019       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.923033       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.962858       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.962895       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.963021       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.963037       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:44.014798       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:44.016438       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:44.016458       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:44.066881       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:44.067018       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:44.067064       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0127 12:36:51.103582    9948 command_runner.go:130] ! W0127 12:35:44.227808       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0127 12:36:51.104119    9948 command_runner.go:130] ! I0127 12:35:44.236233       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0127 12:36:51.104162    9948 command_runner.go:130] ! I0127 12:35:44.236429       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0127 12:36:51.104162    9948 command_runner.go:130] ! I0127 12:35:44.236541       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0127 12:36:51.104162    9948 command_runner.go:130] ! I0127 12:35:44.236556       1 shared_informer.go:313] Waiting for caches to sync for node
	I0127 12:36:51.104162    9948 command_runner.go:130] ! I0127 12:35:44.261051       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0127 12:36:51.104162    9948 command_runner.go:130] ! I0127 12:35:44.261341       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0127 12:36:51.104162    9948 command_runner.go:130] ! I0127 12:35:44.261374       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0127 12:36:51.104162    9948 command_runner.go:130] ! I0127 12:35:44.314220       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0127 12:36:51.104311    9948 command_runner.go:130] ! I0127 12:35:44.314319       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0127 12:36:51.104311    9948 command_runner.go:130] ! I0127 12:35:44.314352       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0127 12:36:51.104347    9948 command_runner.go:130] ! I0127 12:35:44.364392       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0127 12:36:51.104347    9948 command_runner.go:130] ! I0127 12:35:44.364625       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0127 12:36:51.104347    9948 command_runner.go:130] ! I0127 12:35:44.365833       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0127 12:36:51.104347    9948 command_runner.go:130] ! I0127 12:35:44.365937       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0127 12:36:51.104347    9948 command_runner.go:130] ! I0127 12:35:44.365975       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:51.104430    9948 command_runner.go:130] ! I0127 12:35:44.365977       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:51.104430    9948 command_runner.go:130] ! I0127 12:35:44.367697       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0127 12:36:51.104465    9948 command_runner.go:130] ! I0127 12:35:44.368067       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0127 12:36:51.104465    9948 command_runner.go:130] ! I0127 12:35:44.368427       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:51.104465    9948 command_runner.go:130] ! I0127 12:35:44.369763       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0127 12:36:51.104556    9948 command_runner.go:130] ! I0127 12:35:44.370290       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0127 12:36:51.104556    9948 command_runner.go:130] ! I0127 12:35:44.370408       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0127 12:36:51.104556    9948 command_runner.go:130] ! I0127 12:35:44.370568       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:51.104556    9948 command_runner.go:130] ! I0127 12:35:44.412258       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0127 12:36:51.104626    9948 command_runner.go:130] ! I0127 12:35:44.412274       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0127 12:36:51.104626    9948 command_runner.go:130] ! I0127 12:35:44.412282       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0127 12:36:51.104701    9948 command_runner.go:130] ! I0127 12:35:44.412297       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0127 12:36:51.104701    9948 command_runner.go:130] ! I0127 12:35:44.412368       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0127 12:36:51.104701    9948 command_runner.go:130] ! I0127 12:35:44.412379       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0127 12:36:51.104701    9948 command_runner.go:130] ! I0127 12:35:44.517568       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0127 12:36:51.104701    9948 command_runner.go:130] ! I0127 12:35:44.517771       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0127 12:36:51.104701    9948 command_runner.go:130] ! I0127 12:35:44.518074       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0127 12:36:51.104701    9948 command_runner.go:130] ! I0127 12:35:44.518288       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0127 12:36:51.104801    9948 command_runner.go:130] ! I0127 12:35:44.564449       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0127 12:36:51.104801    9948 command_runner.go:130] ! I0127 12:35:44.564546       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0127 12:36:51.104801    9948 command_runner.go:130] ! I0127 12:35:44.564657       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0127 12:36:51.104801    9948 command_runner.go:130] ! I0127 12:35:44.591265       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0127 12:36:51.104801    9948 command_runner.go:130] ! I0127 12:35:44.663628       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.727283       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.739370       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000\" does not exist"
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.739797       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m02\" does not exist"
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.740184       1 shared_informer.go:320] Caches are synced for crt configmap
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.740835       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m03\" does not exist"
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.747985       1 shared_informer.go:320] Caches are synced for GC
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.748593       1 shared_informer.go:320] Caches are synced for job
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.765439       1 shared_informer.go:320] Caches are synced for cronjob
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.765669       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.765982       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.766264       1 shared_informer.go:320] Caches are synced for expand
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.766617       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.767305       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.767462       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.768217       1 shared_informer.go:320] Caches are synced for stateful set
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.766681       1 shared_informer.go:320] Caches are synced for daemon sets
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.774887       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.775167       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.775269       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.775418       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.778028       1 shared_informer.go:320] Caches are synced for HPA
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.793610       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.793916       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.798773       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.805302       1 shared_informer.go:320] Caches are synced for PVC protection
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.805404       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.806234       1 shared_informer.go:320] Caches are synced for PV protection
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.811621       1 shared_informer.go:320] Caches are synced for ephemeral
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.813099       1 shared_informer.go:320] Caches are synced for TTL
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.813420       1 shared_informer.go:320] Caches are synced for namespace
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.813655       1 shared_informer.go:320] Caches are synced for deployment
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.815238       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.819201       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.819433       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.820006       1 shared_informer.go:320] Caches are synced for disruption
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.821695       1 shared_informer.go:320] Caches are synced for taint
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.821905       1 shared_informer.go:320] Caches are synced for endpoint
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.824479       1 shared_informer.go:320] Caches are synced for persistent volume
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.824852       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0127 12:36:51.105456    9948 command_runner.go:130] ! I0127 12:35:44.825228       1 shared_informer.go:320] Caches are synced for attach detach
	I0127 12:36:51.105456    9948 command_runner.go:130] ! I0127 12:35:44.825784       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0127 12:36:51.105456    9948 command_runner.go:130] ! I0127 12:35:44.836209       1 shared_informer.go:320] Caches are synced for service account
	I0127 12:36:51.105456    9948 command_runner.go:130] ! I0127 12:35:44.836651       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0127 12:36:51.105531    9948 command_runner.go:130] ! I0127 12:35:44.836969       1 shared_informer.go:320] Caches are synced for node
	I0127 12:36:51.105531    9948 command_runner.go:130] ! I0127 12:35:44.838015       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0127 12:36:51.105531    9948 command_runner.go:130] ! I0127 12:35:44.838049       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0127 12:36:51.105531    9948 command_runner.go:130] ! I0127 12:35:44.838058       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0127 12:36:51.105531    9948 command_runner.go:130] ! I0127 12:35:44.838065       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0127 12:36:51.105619    9948 command_runner.go:130] ! I0127 12:35:44.838200       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.105643    9948 command_runner.go:130] ! I0127 12:35:44.838217       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.105675    9948 command_runner.go:130] ! I0127 12:35:44.838227       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.105675    9948 command_runner.go:130] ! I0127 12:35:44.844908       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:36:51.105711    9948 command_runner.go:130] ! I0127 12:35:44.845551       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0127 12:36:51.105711    9948 command_runner.go:130] ! I0127 12:35:44.845777       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0127 12:36:51.105747    9948 command_runner.go:130] ! I0127 12:35:44.898551       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.105747    9948 command_runner.go:130] ! I0127 12:35:44.899476       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.105747    9948 command_runner.go:130] ! I0127 12:35:44.900201       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000"
	I0127 12:36:51.105805    9948 command_runner.go:130] ! I0127 12:35:44.900496       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m02"
	I0127 12:36:51.105805    9948 command_runner.go:130] ! I0127 12:35:44.900687       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m03"
	I0127 12:36:51.105877    9948 command_runner.go:130] ! I0127 12:35:44.901405       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0127 12:36:51.105877    9948 command_runner.go:130] ! I0127 12:35:44.984858       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.105920    9948 command_runner.go:130] ! I0127 12:35:45.000632       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="180.930208ms"
	I0127 12:36:51.105920    9948 command_runner.go:130] ! I0127 12:35:45.003909       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="39.2µs"
	I0127 12:36:51.105920    9948 command_runner.go:130] ! I0127 12:35:45.016382       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="195.414857ms"
	I0127 12:36:51.105986    9948 command_runner.go:130] ! I0127 12:35:45.016698       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="108.2µs"
	I0127 12:36:51.105986    9948 command_runner.go:130] ! I0127 12:35:54.975850       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.105986    9948 command_runner.go:130] ! I0127 12:36:32.834093       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:51.106046    9948 command_runner.go:130] ! I0127 12:36:32.834425       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.106046    9948 command_runner.go:130] ! I0127 12:36:32.855708       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.106093    9948 command_runner.go:130] ! I0127 12:36:34.928482       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.106118    9948 command_runner.go:130] ! I0127 12:36:34.940809       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.106149    9948 command_runner.go:130] ! I0127 12:36:34.955742       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.106176    9948 command_runner.go:130] ! I0127 12:36:35.025877       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="15.32946ms"
	I0127 12:36:51.106176    9948 command_runner.go:130] ! I0127 12:36:35.026020       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="30.3µs"
	I0127 12:36:51.106176    9948 command_runner.go:130] ! I0127 12:36:40.041357       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.106176    9948 command_runner.go:130] ! I0127 12:36:47.580904       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="50.8µs"
	I0127 12:36:51.106176    9948 command_runner.go:130] ! I0127 12:36:48.616631       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="19.328909ms"
	I0127 12:36:51.106176    9948 command_runner.go:130] ! I0127 12:36:48.617909       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="35.8µs"
	I0127 12:36:51.106176    9948 command_runner.go:130] ! I0127 12:36:48.650691       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="23.414753ms"
	I0127 12:36:51.106176    9948 command_runner.go:130] ! I0127 12:36:48.651163       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="28.701µs"
	I0127 12:36:51.123876    9948 logs.go:123] Gathering logs for kube-controller-manager [e07a66f8f619] ...
	I0127 12:36:51.123876    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e07a66f8f619"
	I0127 12:36:51.168168    9948 command_runner.go:130] ! I0127 12:11:53.668834       1 serving.go:386] Generated self-signed cert in-memory
	I0127 12:36:51.168267    9948 command_runner.go:130] ! I0127 12:11:53.986868       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0127 12:36:51.168267    9948 command_runner.go:130] ! I0127 12:11:53.987309       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:51.168267    9948 command_runner.go:130] ! I0127 12:11:53.989401       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0127 12:36:51.168267    9948 command_runner.go:130] ! I0127 12:11:53.990012       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:51.168267    9948 command_runner.go:130] ! I0127 12:11:53.990187       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 12:36:51.168267    9948 command_runner.go:130] ! I0127 12:11:53.990322       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:51.168267    9948 command_runner.go:130] ! I0127 12:11:58.581695       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0127 12:36:51.168267    9948 command_runner.go:130] ! I0127 12:11:58.581741       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0127 12:36:51.168267    9948 command_runner.go:130] ! I0127 12:11:58.615284       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0127 12:36:51.168267    9948 command_runner.go:130] ! I0127 12:11:58.615497       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0127 12:36:51.168436    9948 command_runner.go:130] ! I0127 12:11:58.615545       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0127 12:36:51.168436    9948 command_runner.go:130] ! I0127 12:11:58.626456       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0127 12:36:51.168436    9948 command_runner.go:130] ! I0127 12:11:58.626896       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0127 12:36:51.168436    9948 command_runner.go:130] ! I0127 12:11:58.626952       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0127 12:36:51.168515    9948 command_runner.go:130] ! I0127 12:11:58.636784       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0127 12:36:51.168515    9948 command_runner.go:130] ! I0127 12:11:58.636866       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0127 12:36:51.168578    9948 command_runner.go:130] ! I0127 12:11:58.637077       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0127 12:36:51.168633    9948 command_runner.go:130] ! I0127 12:11:58.637108       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0127 12:36:51.168633    9948 command_runner.go:130] ! I0127 12:11:58.649619       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0127 12:36:51.168667    9948 command_runner.go:130] ! I0127 12:11:58.649750       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0127 12:36:51.168690    9948 command_runner.go:130] ! I0127 12:11:58.649765       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0127 12:36:51.168690    9948 command_runner.go:130] ! I0127 12:11:58.650223       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0127 12:36:51.168690    9948 command_runner.go:130] ! I0127 12:11:58.650457       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0127 12:36:51.168747    9948 command_runner.go:130] ! I0127 12:11:58.682646       1 shared_informer.go:320] Caches are synced for tokens
	I0127 12:36:51.168747    9948 command_runner.go:130] ! I0127 12:11:58.684061       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0127 12:36:51.168747    9948 command_runner.go:130] ! I0127 12:11:58.684098       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0127 12:36:51.168812    9948 command_runner.go:130] ! I0127 12:11:58.698781       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.699001       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.699050       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.699060       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.720187       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.720450       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.725202       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.736652       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.737667       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.738017       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.758863       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.759137       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.759589       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.759751       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.778737       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.779301       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.794263       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.805098       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.805155       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.805917       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.889766       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.889864       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.889880       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:59.169736       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:59.169792       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:59.169804       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:59.292507       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:59.292665       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:59.292680       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:59.451231       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:59.451328       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:59.451387       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:59.451649       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:59.594702       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:59.594829       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0127 12:36:51.169378    9948 command_runner.go:130] ! I0127 12:11:59.595498       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0127 12:36:51.169378    9948 command_runner.go:130] ! I0127 12:11:59.595889       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0127 12:36:51.169378    9948 command_runner.go:130] ! I0127 12:11:59.744969       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0127 12:36:51.169378    9948 command_runner.go:130] ! I0127 12:11:59.745617       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0127 12:36:51.169378    9948 command_runner.go:130] ! I0127 12:11:59.745871       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0127 12:36:51.169473    9948 command_runner.go:130] ! I0127 12:11:59.892444       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0127 12:36:51.169473    9948 command_runner.go:130] ! I0127 12:11:59.892907       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0127 12:36:51.169473    9948 command_runner.go:130] ! I0127 12:11:59.893093       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0127 12:36:51.169473    9948 command_runner.go:130] ! I0127 12:12:00.136328       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0127 12:36:51.169473    9948 command_runner.go:130] ! I0127 12:12:00.136634       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0127 12:36:51.169547    9948 command_runner.go:130] ! I0127 12:12:00.136654       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0127 12:36:51.169547    9948 command_runner.go:130] ! I0127 12:12:00.136681       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0127 12:36:51.169547    9948 command_runner.go:130] ! I0127 12:12:00.425858       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0127 12:36:51.169547    9948 command_runner.go:130] ! I0127 12:12:00.426027       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0127 12:36:51.169613    9948 command_runner.go:130] ! I0127 12:12:00.426047       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0127 12:36:51.169642    9948 command_runner.go:130] ! I0127 12:12:00.426160       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0127 12:36:51.169642    9948 command_runner.go:130] ! I0127 12:12:00.426327       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0127 12:36:51.169642    9948 command_runner.go:130] ! I0127 12:12:00.426356       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0127 12:36:51.169642    9948 command_runner.go:130] ! I0127 12:12:00.685414       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0127 12:36:51.169708    9948 command_runner.go:130] ! I0127 12:12:00.685471       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0127 12:36:51.169708    9948 command_runner.go:130] ! I0127 12:12:00.685482       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0127 12:36:51.169708    9948 command_runner.go:130] ! I0127 12:12:00.841490       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0127 12:36:51.169708    9948 command_runner.go:130] ! I0127 12:12:00.841888       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0127 12:36:51.169708    9948 command_runner.go:130] ! I0127 12:12:00.841953       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0127 12:36:51.169708    9948 command_runner.go:130] ! I0127 12:12:00.888027       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0127 12:36:51.169815    9948 command_runner.go:130] ! I0127 12:12:00.888135       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0127 12:36:51.169815    9948 command_runner.go:130] ! I0127 12:12:00.888174       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:51.169815    9948 command_runner.go:130] ! I0127 12:12:00.889767       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0127 12:36:51.169883    9948 command_runner.go:130] ! I0127 12:12:00.889893       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0127 12:36:51.169883    9948 command_runner.go:130] ! I0127 12:12:00.889957       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:00.890020       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:00.890047       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:00.890072       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:00.890079       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:00.890101       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:00.890256       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:00.890391       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.042988       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.043513       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.043602       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.043761       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0127 12:36:51.169935    9948 command_runner.go:130] ! W0127 12:12:01.189051       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.192613       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.192663       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.193062       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.193147       1 shared_informer.go:313] Waiting for caches to sync for node
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.493812       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.493885       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.493919       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.494208       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.494371       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.494391       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.494413       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.494456       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.494473       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.494487       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0127 12:36:51.170470    9948 command_runner.go:130] ! I0127 12:12:01.494531       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0127 12:36:51.170470    9948 command_runner.go:130] ! I0127 12:12:01.494547       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0127 12:36:51.170470    9948 command_runner.go:130] ! I0127 12:12:01.494617       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0127 12:36:51.170470    9948 command_runner.go:130] ! I0127 12:12:01.494687       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0127 12:36:51.170576    9948 command_runner.go:130] ! I0127 12:12:01.494717       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0127 12:36:51.170576    9948 command_runner.go:130] ! I0127 12:12:01.494749       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0127 12:36:51.170647    9948 command_runner.go:130] ! I0127 12:12:01.494763       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0127 12:36:51.170647    9948 command_runner.go:130] ! I0127 12:12:01.494781       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0127 12:36:51.170647    9948 command_runner.go:130] ! I0127 12:12:01.494815       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0127 12:36:51.170734    9948 command_runner.go:130] ! I0127 12:12:01.494890       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0127 12:36:51.170734    9948 command_runner.go:130] ! I0127 12:12:01.495196       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0127 12:36:51.170734    9948 command_runner.go:130] ! I0127 12:12:01.495268       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0127 12:36:51.170804    9948 command_runner.go:130] ! I0127 12:12:01.495404       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0127 12:36:51.170804    9948 command_runner.go:130] ! I0127 12:12:01.495519       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0127 12:36:51.170804    9948 command_runner.go:130] ! I0127 12:12:01.640900       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0127 12:36:51.170804    9948 command_runner.go:130] ! I0127 12:12:01.641423       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0127 12:36:51.170905    9948 command_runner.go:130] ! I0127 12:12:01.641492       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:01.789671       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:01.790209       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:01.790224       1 shared_informer.go:313] Waiting for caches to sync for job
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:01.939873       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:01.940295       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:01.940375       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.099155       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.099654       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.099741       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.240427       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.240688       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.240725       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.390343       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.390438       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.390450       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.539643       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.539766       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.539778       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.691835       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.691969       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.739108       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.739143       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.739157       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.739400       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.739775       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.740069       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.890126       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.890235       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0127 12:36:51.171497    9948 command_runner.go:130] ! I0127 12:12:02.890247       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0127 12:36:51.171497    9948 command_runner.go:130] ! I0127 12:12:03.040125       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0127 12:36:51.171497    9948 command_runner.go:130] ! I0127 12:12:03.040770       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0127 12:36:51.171497    9948 command_runner.go:130] ! I0127 12:12:03.040983       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0127 12:36:51.171497    9948 command_runner.go:130] ! I0127 12:12:03.063768       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0127 12:36:51.171594    9948 command_runner.go:130] ! I0127 12:12:03.092877       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0127 12:36:51.171594    9948 command_runner.go:130] ! I0127 12:12:03.093448       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0127 12:36:51.171594    9948 command_runner.go:130] ! I0127 12:12:03.110720       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000\" does not exist"
	I0127 12:36:51.171655    9948 command_runner.go:130] ! I0127 12:12:03.126986       1 shared_informer.go:320] Caches are synced for endpoint
	I0127 12:36:51.171679    9948 command_runner.go:130] ! I0127 12:12:03.127087       1 shared_informer.go:320] Caches are synced for taint
	I0127 12:36:51.171679    9948 command_runner.go:130] ! I0127 12:12:03.127203       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.127313       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000"
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.127524       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.137503       1 shared_informer.go:320] Caches are synced for service account
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.137554       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.138208       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.138217       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.138352       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.141127       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.141405       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.141415       1 shared_informer.go:320] Caches are synced for TTL
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.141424       1 shared_informer.go:320] Caches are synced for ephemeral
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.141607       1 shared_informer.go:320] Caches are synced for daemon sets
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.141617       1 shared_informer.go:320] Caches are synced for stateful set
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.142442       1 shared_informer.go:320] Caches are synced for cronjob
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.146511       1 shared_informer.go:320] Caches are synced for persistent volume
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.150765       1 shared_informer.go:320] Caches are synced for expand
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.152122       1 shared_informer.go:320] Caches are synced for PVC protection
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.160180       1 shared_informer.go:320] Caches are synced for GC
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.164570       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.170520       1 shared_informer.go:320] Caches are synced for namespace
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.185040       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.186131       1 shared_informer.go:320] Caches are synced for HPA
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.188683       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.191196       1 shared_informer.go:320] Caches are synced for crt configmap
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.192089       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.192497       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.192682       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.192862       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.193013       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.193030       1 shared_informer.go:320] Caches are synced for job
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.193151       1 shared_informer.go:320] Caches are synced for deployment
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.193982       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.194157       1 shared_informer.go:320] Caches are synced for node
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.194244       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.194281       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.194310       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.194318       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.194846       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.196614       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:36:51.172238    9948 command_runner.go:130] ! I0127 12:12:03.197111       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0127 12:36:51.172238    9948 command_runner.go:130] ! I0127 12:12:03.197095       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0127 12:36:51.172278    9948 command_runner.go:130] ! I0127 12:12:03.199168       1 shared_informer.go:320] Caches are synced for disruption
	I0127 12:36:51.172278    9948 command_runner.go:130] ! I0127 12:12:03.200153       1 shared_informer.go:320] Caches are synced for attach detach
	I0127 12:36:51.172328    9948 command_runner.go:130] ! I0127 12:12:03.207229       1 shared_informer.go:320] Caches are synced for PV protection
	I0127 12:36:51.172328    9948 command_runner.go:130] ! I0127 12:12:03.214016       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-659000" podCIDRs=["10.244.0.0/24"]
	I0127 12:36:51.172362    9948 command_runner.go:130] ! I0127 12:12:03.214057       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.172362    9948 command_runner.go:130] ! I0127 12:12:03.214083       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.172390    9948 command_runner.go:130] ! I0127 12:12:03.216325       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0127 12:36:51.172424    9948 command_runner.go:130] ! I0127 12:12:03.840748       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.172424    9948 command_runner.go:130] ! I0127 12:12:04.356274       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="345.711056ms"
	I0127 12:36:51.172453    9948 command_runner.go:130] ! I0127 12:12:04.454747       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="97.841105ms"
	I0127 12:36:51.172479    9948 command_runner.go:130] ! I0127 12:12:04.534437       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="79.56576ms"
	I0127 12:36:51.172498    9948 command_runner.go:130] ! I0127 12:12:04.576528       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="41.959673ms"
	I0127 12:36:51.172554    9948 command_runner.go:130] ! I0127 12:12:04.576771       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="53.3µs"
	I0127 12:36:51.172586    9948 command_runner.go:130] ! I0127 12:12:26.045035       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.172625    9948 command_runner.go:130] ! I0127 12:12:26.074083       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.172625    9948 command_runner.go:130] ! I0127 12:12:26.085407       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="109.3µs"
	I0127 12:36:51.172625    9948 command_runner.go:130] ! I0127 12:12:26.129584       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="119.3µs"
	I0127 12:36:51.172681    9948 command_runner.go:130] ! I0127 12:12:27.964629       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="49.302µs"
	I0127 12:36:51.172703    9948 command_runner.go:130] ! I0127 12:12:28.020606       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="31.923176ms"
	I0127 12:36:51.172703    9948 command_runner.go:130] ! I0127 12:12:28.020971       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="110.703µs"
	I0127 12:36:51.172703    9948 command_runner.go:130] ! I0127 12:12:28.132341       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0127 12:36:51.172703    9948 command_runner.go:130] ! I0127 12:12:29.790464       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.172815    9948 command_runner.go:130] ! I0127 12:15:07.611410       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m02\" does not exist"
	I0127 12:36:51.172815    9948 command_runner.go:130] ! I0127 12:15:07.630009       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-659000-m02" podCIDRs=["10.244.1.0/24"]
	I0127 12:36:51.172873    9948 command_runner.go:130] ! I0127 12:15:07.631297       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.172873    9948 command_runner.go:130] ! I0127 12:15:07.631526       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.172873    9948 command_runner.go:130] ! I0127 12:15:07.655401       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.172873    9948 command_runner.go:130] ! I0127 12:15:07.883346       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.172954    9948 command_runner.go:130] ! I0127 12:15:08.169505       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m02"
	I0127 12:36:51.172954    9948 command_runner.go:130] ! I0127 12:15:08.255644       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.172954    9948 command_runner.go:130] ! I0127 12:15:08.418223       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.173007    9948 command_runner.go:130] ! I0127 12:15:17.811768       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.173007    9948 command_runner.go:130] ! I0127 12:15:36.752543       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:51.173007    9948 command_runner.go:130] ! I0127 12:15:36.753915       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.173090    9948 command_runner.go:130] ! I0127 12:15:36.769807       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.173090    9948 command_runner.go:130] ! I0127 12:15:38.199464       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.173090    9948 command_runner.go:130] ! I0127 12:15:38.449749       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.173165    9948 command_runner.go:130] ! I0127 12:16:02.550786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="103.313802ms"
	I0127 12:36:51.173238    9948 command_runner.go:130] ! I0127 12:16:02.585867       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="34.67067ms"
	I0127 12:36:51.173238    9948 command_runner.go:130] ! I0127 12:16:02.586257       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="347.903µs"
	I0127 12:36:51.173238    9948 command_runner.go:130] ! I0127 12:16:02.588870       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="48.6µs"
	I0127 12:36:51.173238    9948 command_runner.go:130] ! I0127 12:16:05.434486       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="13.589639ms"
	I0127 12:36:51.173338    9948 command_runner.go:130] ! I0127 12:16:05.435765       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="54.401µs"
	I0127 12:36:51.173338    9948 command_runner.go:130] ! I0127 12:16:05.890170       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="9.003392ms"
	I0127 12:36:51.173338    9948 command_runner.go:130] ! I0127 12:16:05.890477       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="36.901µs"
	I0127 12:36:51.173338    9948 command_runner.go:130] ! I0127 12:16:09.305780       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.173409    9948 command_runner.go:130] ! I0127 12:16:33.434322       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.173433    9948 command_runner.go:130] ! I0127 12:19:26.820887       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.173433    9948 command_runner.go:130] ! I0127 12:19:54.916460       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:51.173483    9948 command_runner.go:130] ! I0127 12:19:54.917420       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m03\" does not exist"
	I0127 12:36:51.173649    9948 command_runner.go:130] ! I0127 12:19:54.965530       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-659000-m03" podCIDRs=["10.244.2.0/24"]
	I0127 12:36:51.173649    9948 command_runner.go:130] ! I0127 12:19:54.966061       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.173649    9948 command_runner.go:130] ! I0127 12:19:54.966297       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.173730    9948 command_runner.go:130] ! I0127 12:19:55.802981       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.173730    9948 command_runner.go:130] ! I0127 12:19:56.378698       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.173730    9948 command_runner.go:130] ! I0127 12:19:58.252320       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m03"
	I0127 12:36:51.173812    9948 command_runner.go:130] ! I0127 12:19:58.280410       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.173812    9948 command_runner.go:130] ! I0127 12:20:05.560777       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.173812    9948 command_runner.go:130] ! I0127 12:20:25.959831       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.173918    9948 command_runner.go:130] ! I0127 12:20:28.750598       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.173943    9948 command_runner.go:130] ! I0127 12:20:28.751325       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:51.173943    9948 command_runner.go:130] ! I0127 12:20:28.769163       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.173943    9948 command_runner.go:130] ! I0127 12:20:33.279397       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174005    9948 command_runner.go:130] ! I0127 12:23:26.795899       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.174060    9948 command_runner.go:130] ! I0127 12:24:32.956118       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.174060    9948 command_runner.go:130] ! I0127 12:25:42.001288       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174060    9948 command_runner.go:130] ! I0127 12:28:32.628178       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.174060    9948 command_runner.go:130] ! I0127 12:28:38.397672       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:51.174135    9948 command_runner.go:130] ! I0127 12:28:38.399092       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174135    9948 command_runner.go:130] ! I0127 12:28:38.428451       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174135    9948 command_runner.go:130] ! I0127 12:28:43.510900       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174208    9948 command_runner.go:130] ! I0127 12:29:38.000555       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.174231    9948 command_runner.go:130] ! I0127 12:30:52.866288       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:30:52.895359       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:30:58.140304       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:04.208510       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:04.209007       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m03\" does not exist"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:04.238560       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-659000-m03" podCIDRs=["10.244.3.0/24"]
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:04.238634       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! E0127 12:31:04.255963       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-659000-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-659000-m03" podCIDRs=["10.244.4.0/24"]
	I0127 12:36:51.174257    9948 command_runner.go:130] ! E0127 12:31:04.256068       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-659000-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! E0127 12:31:04.256109       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'multinode-659000-m03': failed to patch node CIDR: Node \"multinode-659000-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:04.256134       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:04.261242       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:04.513319       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:05.081710       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:08.523576       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:14.394811       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:22.407069       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:22.407472       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:22.419743       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:23.498434       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:33:08.544063       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:33:08.544656       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:33:08.574301       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:33:13.661256       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.199393    9948 logs.go:123] Gathering logs for kube-apiserver [ea993630a310] ...
	I0127 12:36:51.199393    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea993630a310"
	I0127 12:36:51.228398    9948 command_runner.go:130] ! W0127 12:35:38.851605       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0127 12:36:51.229061    9948 command_runner.go:130] ! I0127 12:35:38.853397       1 options.go:238] external host was not specified, using 172.29.198.106
	I0127 12:36:51.229061    9948 command_runner.go:130] ! I0127 12:35:38.858160       1 server.go:143] Version: v1.32.1
	I0127 12:36:51.229061    9948 command_runner.go:130] ! I0127 12:35:38.858493       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:51.229061    9948 command_runner.go:130] ! I0127 12:35:39.798695       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0127 12:36:51.229527    9948 command_runner.go:130] ! I0127 12:35:39.843688       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 12:36:51.229683    9948 command_runner.go:130] ! I0127 12:35:39.853521       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0127 12:36:51.230435    9948 command_runner.go:130] ! I0127 12:35:39.853736       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0127 12:36:51.230435    9948 command_runner.go:130] ! I0127 12:35:39.854572       1 instance.go:233] Using reconciler: lease
	I0127 12:36:51.230435    9948 command_runner.go:130] ! I0127 12:35:39.914509       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0127 12:36:51.231160    9948 command_runner.go:130] ! W0127 12:35:39.914792       1 genericapiserver.go:767] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.231160    9948 command_runner.go:130] ! I0127 12:35:40.232206       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0127 12:36:51.231259    9948 command_runner.go:130] ! I0127 12:35:40.232893       1 apis.go:106] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0127 12:36:51.231259    9948 command_runner.go:130] ! I0127 12:35:40.488401       1 apis.go:106] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0127 12:36:51.231259    9948 command_runner.go:130] ! I0127 12:35:40.610998       1 apis.go:106] API group "resource.k8s.io" is not enabled, skipping.
	I0127 12:36:51.231259    9948 command_runner.go:130] ! I0127 12:35:40.646097       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0127 12:36:51.231346    9948 command_runner.go:130] ! W0127 12:35:40.646401       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.231370    9948 command_runner.go:130] ! W0127 12:35:40.646556       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:51.231398    9948 command_runner.go:130] ! I0127 12:35:40.647499       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0127 12:36:51.231398    9948 command_runner.go:130] ! W0127 12:35:40.647580       1 genericapiserver.go:767] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.231398    9948 command_runner.go:130] ! I0127 12:35:40.648520       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0127 12:36:51.231398    9948 command_runner.go:130] ! I0127 12:35:40.649666       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0127 12:36:51.231398    9948 command_runner.go:130] ! W0127 12:35:40.649756       1 genericapiserver.go:767] Skipping API autoscaling/v2beta1 because it has no resources.
	I0127 12:36:51.231398    9948 command_runner.go:130] ! W0127 12:35:40.649766       1 genericapiserver.go:767] Skipping API autoscaling/v2beta2 because it has no resources.
	I0127 12:36:51.231398    9948 command_runner.go:130] ! I0127 12:35:40.651998       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0127 12:36:51.231398    9948 command_runner.go:130] ! W0127 12:35:40.652100       1 genericapiserver.go:767] Skipping API batch/v1beta1 because it has no resources.
	I0127 12:36:51.231398    9948 command_runner.go:130] ! I0127 12:35:40.653327       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0127 12:36:51.231398    9948 command_runner.go:130] ! W0127 12:35:40.653629       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.231398    9948 command_runner.go:130] ! W0127 12:35:40.653645       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:51.231398    9948 command_runner.go:130] ! I0127 12:35:40.654270       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0127 12:36:51.231398    9948 command_runner.go:130] ! W0127 12:35:40.654362       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.231398    9948 command_runner.go:130] ! W0127 12:35:40.654371       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1alpha2 because it has no resources.
	I0127 12:36:51.231398    9948 command_runner.go:130] ! I0127 12:35:40.655349       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0127 12:36:51.231398    9948 command_runner.go:130] ! W0127 12:35:40.655494       1 genericapiserver.go:767] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.231398    9948 command_runner.go:130] ! I0127 12:35:40.657969       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0127 12:36:51.231398    9948 command_runner.go:130] ! W0127 12:35:40.658067       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.231398    9948 command_runner.go:130] ! W0127 12:35:40.658077       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:51.231935    9948 command_runner.go:130] ! I0127 12:35:40.658845       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0127 12:36:51.231935    9948 command_runner.go:130] ! W0127 12:35:40.658940       1 genericapiserver.go:767] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.231998    9948 command_runner.go:130] ! W0127 12:35:40.658951       1 genericapiserver.go:767] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:51.231998    9948 command_runner.go:130] ! I0127 12:35:40.660043       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0127 12:36:51.231998    9948 command_runner.go:130] ! W0127 12:35:40.660172       1 genericapiserver.go:767] Skipping API policy/v1beta1 because it has no resources.
	I0127 12:36:51.232059    9948 command_runner.go:130] ! I0127 12:35:40.662431       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0127 12:36:51.232078    9948 command_runner.go:130] ! W0127 12:35:40.662519       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.232078    9948 command_runner.go:130] ! W0127 12:35:40.662531       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:51.232078    9948 command_runner.go:130] ! I0127 12:35:40.663022       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0127 12:36:51.232078    9948 command_runner.go:130] ! W0127 12:35:40.663153       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.232078    9948 command_runner.go:130] ! W0127 12:35:40.663165       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:51.232174    9948 command_runner.go:130] ! I0127 12:35:40.666344       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0127 12:36:51.232174    9948 command_runner.go:130] ! W0127 12:35:40.666495       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.232174    9948 command_runner.go:130] ! W0127 12:35:40.666521       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:51.232230    9948 command_runner.go:130] ! I0127 12:35:40.668345       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0127 12:36:51.232254    9948 command_runner.go:130] ! W0127 12:35:40.668516       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta3 because it has no resources.
	I0127 12:36:51.232254    9948 command_runner.go:130] ! W0127 12:35:40.668527       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0127 12:36:51.232316    9948 command_runner.go:130] ! W0127 12:35:40.668531       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.232316    9948 command_runner.go:130] ! I0127 12:35:40.673502       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0127 12:36:51.232316    9948 command_runner.go:130] ! W0127 12:35:40.673587       1 genericapiserver.go:767] Skipping API apps/v1beta2 because it has no resources.
	I0127 12:36:51.232316    9948 command_runner.go:130] ! W0127 12:35:40.673597       1 genericapiserver.go:767] Skipping API apps/v1beta1 because it has no resources.
	I0127 12:36:51.232370    9948 command_runner.go:130] ! I0127 12:35:40.676193       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0127 12:36:51.232397    9948 command_runner.go:130] ! W0127 12:35:40.676284       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.232397    9948 command_runner.go:130] ! W0127 12:35:40.676294       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:51.232397    9948 command_runner.go:130] ! I0127 12:35:40.677186       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0127 12:36:51.232397    9948 command_runner.go:130] ! W0127 12:35:40.677276       1 genericapiserver.go:767] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.232457    9948 command_runner.go:130] ! I0127 12:35:40.688978       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0127 12:36:51.232457    9948 command_runner.go:130] ! W0127 12:35:40.689072       1 genericapiserver.go:767] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.232537    9948 command_runner.go:130] ! I0127 12:35:41.320439       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.320849       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.321234       1 secure_serving.go:213] Serving securely on [::]:8443
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.321512       1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.324372       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.325924       1 controller.go:119] Starting legacy_token_tracking_controller
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.326193       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.327573       1 remote_available_controller.go:411] Starting RemoteAvailability controller
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.328217       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.328319       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.329060       1 cluster_authentication_trust_controller.go:462] Starting cluster_authentication_trust_controller controller
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.329095       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.329225       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.329996       1 controller.go:78] Starting OpenAPI AggregationController
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.330057       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.330085       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.330333       1 apiservice_controller.go:100] Starting APIServiceRegistrationController
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.330379       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.331391       1 customresource_discovery_controller.go:292] Starting DiscoveryController
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.331485       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.327929       1 local_available_controller.go:156] Starting LocalAvailability controller
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.333671       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.333703       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.333958       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.335863       1 controller.go:142] Starting OpenAPI controller
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.336704       1 controller.go:90] Starting OpenAPI V3 controller
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.336831       1 naming_controller.go:294] Starting NamingConditionController
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.337057       1 establishing_controller.go:81] Starting EstablishingController
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.337215       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.337324       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.337408       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.327968       1 aggregator.go:169] waiting for initial CRD sync...
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.387084       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.387441       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.450926       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.451366       1 policy_source.go:240] refreshing policies
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.488750       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0127 12:36:51.233093    9948 command_runner.go:130] ! I0127 12:35:41.488990       1 aggregator.go:171] initial CRD sync complete...
	I0127 12:36:51.233093    9948 command_runner.go:130] ! I0127 12:35:41.489245       1 autoregister_controller.go:144] Starting autoregister controller
	I0127 12:36:51.233093    9948 command_runner.go:130] ! I0127 12:35:41.489480       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:41.489653       1 cache.go:39] Caches are synced for autoregister controller
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:41.499151       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:41.527390       1 shared_informer.go:320] Caches are synced for configmaps
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:41.528625       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:41.529892       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:41.530639       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:41.531604       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:41.531638       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:41.534721       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:41.540933       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:41.545944       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:42.357869       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:42.374307       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 12:36:51.233136    9948 command_runner.go:130] ! W0127 12:35:43.074223       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.29.198.106]
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:43.075938       1 controller.go:615] quota admission added evaluator for: endpoints
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:43.085006       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:44.603084       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:44.989601       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:45.141450       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:45.327075       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:45.338333       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 12:36:51.241939    9948 logs.go:123] Gathering logs for coredns [b3a9ed6e130c] ...
	I0127 12:36:51.242464    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a9ed6e130c"
	I0127 12:36:51.269635    9948 command_runner.go:130] > .:53
	I0127 12:36:51.269635    9948 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 5e2e325279dfa828a8fd1b44d83ab4703abb0247d4beadde42157147650fe687c0862eaa4caa15a5d9139c48c9a9dd5ec3cd962ba60368e8ffb4d02ae4d29aeb
	I0127 12:36:51.269635    9948 command_runner.go:130] > CoreDNS-1.11.3
	I0127 12:36:51.269635    9948 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0127 12:36:51.269816    9948 command_runner.go:130] > [INFO] 127.0.0.1:47464 - 34099 "HINFO IN 5313391549706874198.1206200090770907475. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062040871s
	I0127 12:36:51.270073    9948 logs.go:123] Gathering logs for kube-scheduler [a16e06a03860] ...
	I0127 12:36:51.270073    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a16e06a03860"
	I0127 12:36:51.298154    9948 command_runner.go:130] ! I0127 12:11:54.280431       1 serving.go:386] Generated self-signed cert in-memory
	I0127 12:36:51.298154    9948 command_runner.go:130] ! W0127 12:11:55.581187       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.581309       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.581382       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.581390       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 12:36:51.299138    9948 command_runner.go:130] ! I0127 12:11:55.694969       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! I0127 12:11:55.695193       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:51.299138    9948 command_runner.go:130] ! I0127 12:11:55.700077       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 12:36:51.299138    9948 command_runner.go:130] ! I0127 12:11:55.700446       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! I0127 12:11:55.700992       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:36:51.299138    9948 command_runner.go:130] ! I0127 12:11:55.701410       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.715521       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:51.299138    9948 command_runner.go:130] ! E0127 12:11:55.717196       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.717649       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0127 12:36:51.299138    9948 command_runner.go:130] ! E0127 12:11:55.717921       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.718583       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0127 12:36:51.299138    9948 command_runner.go:130] ! E0127 12:11:55.718820       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.728298       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! E0127 12:11:55.728648       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.729000       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0127 12:36:51.299138    9948 command_runner.go:130] ! E0127 12:11:55.729243       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.729633       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0127 12:36:51.299138    9948 command_runner.go:130] ! E0127 12:11:55.730380       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.729677       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0127 12:36:51.299138    9948 command_runner.go:130] ! E0127 12:11:55.730837       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.729713       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.729749       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:51.299138    9948 command_runner.go:130] ! E0127 12:11:55.731479       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.729782       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:51.299138    9948 command_runner.go:130] ! E0127 12:11:55.732242       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.729811       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:51.299138    9948 command_runner.go:130] ! E0127 12:11:55.734240       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! E0127 12:11:55.734704       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.738077       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0127 12:36:51.299138    9948 command_runner.go:130] ! E0127 12:11:55.738873       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.739202       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0127 12:36:51.299138    9948 command_runner.go:130] ! E0127 12:11:55.739366       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.739719       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0127 12:36:51.300135    9948 command_runner.go:130] ! E0127 12:11:55.739865       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.300135    9948 command_runner.go:130] ! W0127 12:11:55.740221       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:55.740378       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:55.740608       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:55.740761       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:56.556598       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:56.557622       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:56.595830       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:56.596047       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:56.691826       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:56.691909       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:56.806048       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:56.806109       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:56.846817       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:56.847194       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:56.871314       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:56.872178       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:56.887386       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:56.887549       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:56.918642       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:56.919135       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:57.039216       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:57.039707       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:57.055169       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:57.055233       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:57.106656       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:57.106828       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:57.214186       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:57.214290       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:57.298150       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:57.298337       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:57.310098       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0127 12:36:51.303142    9948 command_runner.go:130] ! E0127 12:11:57.310312       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.303142    9948 command_runner.go:130] ! W0127 12:11:57.312117       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:51.303142    9948 command_runner.go:130] ! E0127 12:11:57.312192       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.303142    9948 command_runner.go:130] ! W0127 12:11:57.321525       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0127 12:36:51.303142    9948 command_runner.go:130] ! E0127 12:11:57.321832       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.303142    9948 command_runner.go:130] ! I0127 12:11:59.701790       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:36:51.303142    9948 command_runner.go:130] ! I0127 12:33:15.443053       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0127 12:36:51.303142    9948 command_runner.go:130] ! I0127 12:33:15.443143       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0127 12:36:51.303142    9948 command_runner.go:130] ! I0127 12:33:15.452458       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 12:36:51.303142    9948 command_runner.go:130] ! E0127 12:33:15.487412       1 run.go:72] "command failed" err="finished without leader elect"
	I0127 12:36:51.314139    9948 logs.go:123] Gathering logs for kindnet [373bec67270f] ...
	I0127 12:36:51.314139    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 373bec67270f"
	I0127 12:36:51.347179    9948 command_runner.go:130] ! I0127 12:35:44.464092       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0127 12:36:51.347179    9948 command_runner.go:130] ! I0127 12:35:44.489651       1 main.go:139] hostIP = 172.29.198.106
	I0127 12:36:51.347261    9948 command_runner.go:130] ! podIP = 172.29.198.106
	I0127 12:36:51.347261    9948 command_runner.go:130] ! I0127 12:35:44.489794       1 main.go:148] setting mtu 1500 for CNI 
	I0127 12:36:51.347261    9948 command_runner.go:130] ! I0127 12:35:44.489865       1 main.go:178] kindnetd IP family: "ipv4"
	I0127 12:36:51.347261    9948 command_runner.go:130] ! I0127 12:35:44.490024       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0127 12:36:51.347261    9948 command_runner.go:130] ! I0127 12:35:45.397363       1 main.go:239] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-40: Error: Could not process rule: Operation not supported
	I0127 12:36:51.347323    9948 command_runner.go:130] ! add table inet kindnet-network-policies
	I0127 12:36:51.347323    9948 command_runner.go:130] ! ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:51.347323    9948 command_runner.go:130] ! , skipping network policies
	I0127 12:36:51.347373    9948 command_runner.go:130] ! W0127 12:36:15.407551       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0127 12:36:51.347373    9948 command_runner.go:130] ! E0127 12:36:15.407870       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError"
	I0127 12:36:51.347373    9948 command_runner.go:130] ! I0127 12:36:25.405793       1 main.go:297] Handling node with IPs: map[172.29.198.106:{}]
	I0127 12:36:51.347457    9948 command_runner.go:130] ! I0127 12:36:25.405967       1 main.go:301] handling current node
	I0127 12:36:51.347457    9948 command_runner.go:130] ! I0127 12:36:25.406822       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:51.347457    9948 command_runner.go:130] ! I0127 12:36:25.406903       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:51.347457    9948 command_runner.go:130] ! I0127 12:36:25.408014       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.29.199.129 Flags: [] Table: 0 Realm: 0} 
	I0127 12:36:51.347508    9948 command_runner.go:130] ! I0127 12:36:25.408956       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:51.347549    9948 command_runner.go:130] ! I0127 12:36:25.409055       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:51.347549    9948 command_runner.go:130] ! I0127 12:36:25.409321       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.29.206.88 Flags: [] Table: 0 Realm: 0} 
	I0127 12:36:51.347620    9948 command_runner.go:130] ! I0127 12:36:35.400986       1 main.go:297] Handling node with IPs: map[172.29.198.106:{}]
	I0127 12:36:51.347620    9948 command_runner.go:130] ! I0127 12:36:35.401115       1 main.go:301] handling current node
	I0127 12:36:51.347620    9948 command_runner.go:130] ! I0127 12:36:35.401203       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:51.347620    9948 command_runner.go:130] ! I0127 12:36:35.401377       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:51.347620    9948 command_runner.go:130] ! I0127 12:36:35.401789       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:51.347701    9948 command_runner.go:130] ! I0127 12:36:35.401927       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:51.347701    9948 command_runner.go:130] ! I0127 12:36:45.400837       1 main.go:297] Handling node with IPs: map[172.29.198.106:{}]
	I0127 12:36:51.347723    9948 command_runner.go:130] ! I0127 12:36:45.401002       1 main.go:301] handling current node
	I0127 12:36:51.347723    9948 command_runner.go:130] ! I0127 12:36:45.401061       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:51.347748    9948 command_runner.go:130] ! I0127 12:36:45.401072       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:51.347748    9948 command_runner.go:130] ! I0127 12:36:45.401385       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:51.347748    9948 command_runner.go:130] ! I0127 12:36:45.401462       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:53.862498    9948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:53.890742    9948 command_runner.go:130] > 2017
	I0127 12:36:53.890742    9948 api_server.go:72] duration metric: took 1m6.911385s to wait for apiserver process to appear ...
	I0127 12:36:53.890742    9948 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:36:53.899408    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 12:36:53.927140    9948 command_runner.go:130] > ea993630a310
	I0127 12:36:53.927244    9948 logs.go:282] 1 containers: [ea993630a310]
	I0127 12:36:53.936808    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 12:36:53.962367    9948 command_runner.go:130] > 0ef2a3b50bae
	I0127 12:36:53.962446    9948 logs.go:282] 1 containers: [0ef2a3b50bae]
	I0127 12:36:53.970030    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 12:36:53.993916    9948 command_runner.go:130] > b3a9ed6e130c
	I0127 12:36:53.993916    9948 command_runner.go:130] > f818dd15d8b0
	I0127 12:36:53.993916    9948 logs.go:282] 2 containers: [b3a9ed6e130c f818dd15d8b0]
	I0127 12:36:54.001905    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 12:36:54.027794    9948 command_runner.go:130] > ed51c7eaa966
	I0127 12:36:54.027794    9948 command_runner.go:130] > a16e06a03860
	I0127 12:36:54.027794    9948 logs.go:282] 2 containers: [ed51c7eaa966 a16e06a03860]
	I0127 12:36:54.034908    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 12:36:54.063526    9948 command_runner.go:130] > 0283b35dee3c
	I0127 12:36:54.063526    9948 command_runner.go:130] > bbec7ccef7da
	I0127 12:36:54.063526    9948 logs.go:282] 2 containers: [0283b35dee3c bbec7ccef7da]
	I0127 12:36:54.071374    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 12:36:54.099256    9948 command_runner.go:130] > 8d4872cda28d
	I0127 12:36:54.099337    9948 command_runner.go:130] > e07a66f8f619
	I0127 12:36:54.099337    9948 logs.go:282] 2 containers: [8d4872cda28d e07a66f8f619]
	I0127 12:36:54.108236    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0127 12:36:54.134342    9948 command_runner.go:130] > 373bec67270f
	I0127 12:36:54.134342    9948 command_runner.go:130] > d758000dda95
	I0127 12:36:54.134342    9948 logs.go:282] 2 containers: [373bec67270f d758000dda95]
	I0127 12:36:54.135331    9948 logs.go:123] Gathering logs for kubelet ...
	I0127 12:36:54.135331    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:32 multinode-659000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1507]: I0127 12:35:33.096330    1507 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1507]: I0127 12:35:33.097069    1507 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1507]: I0127 12:35:33.098504    1507 server.go:954] "Client rotation is on, will bootstrap in background"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1507]: E0127 12:35:33.099084    1507 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1565]: I0127 12:35:33.855505    1565 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1565]: I0127 12:35:33.856023    1565 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1565]: I0127 12:35:33.856456    1565 server.go:954] "Client rotation is on, will bootstrap in background"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1565]: E0127 12:35:33.856573    1565 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:34 multinode-659000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.167839    1648 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.168570    1648 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.169526    1648 server.go:954] "Client rotation is on, will bootstrap in background"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.171330    1648 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.190537    1648 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.208219    1648 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.208354    1648 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.217489    1648 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.217603    1648 server.go:841] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.218319    1648 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.218396    1648 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-659000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.218720    1648 topology_manager.go:138] "Creating topology manager with none policy"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.218780    1648 container_manager_linux.go:304] "Creating device plugin manager"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.219430    1648 state_mem.go:36] "Initialized new in-memory state store"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.221396    1648 kubelet.go:446] "Attempting to sync node with API server"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.221465    1648 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.221524    1648 kubelet.go:352] "Adding apiserver pod source"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.221568    1648 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.230949    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-659000&limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.231085    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-659000&limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.232363    1648 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="docker" version="27.4.0" apiVersion="v1"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.236967    1648 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.237190    1648 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.245589    1648 watchdog_linux.go:99] "Systemd watchdog is not enabled"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.245760    1648 server.go:1287] "Started kubelet"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.246317    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.246411    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.246814    1648 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.247495    1648 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.249106    1648 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.260914    1648 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.262947    1648 server.go:490] "Adding debug handlers to kubelet server"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.264052    1648 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.267083    1648 volume_manager.go:297] "Starting Kubelet Volume Manager"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.267485    1648 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"multinode-659000\" not found"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.270946    1648 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.29.198.106:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-659000.181e8cd12d2fa1af  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-659000,UID:multinode-659000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-659000,},FirstTimestamp:2025-01-27 12:35:36.245739951 +0000 UTC m=+0.150414507,LastTimestamp:2025-01-27 12:35:36.245739951 +0000 UTC m=+0.150414507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-659000,}"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.275270    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-659000?timeout=10s\": dial tcp 172.29.198.106:8443: connect: connection refused" interval="200ms"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.275715    1648 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.280615    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.280911    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.282354    1648 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.282424    1648 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.282441    1648 factory.go:221] Registration of the systemd container factory successfully
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.345823    1648 reconciler.go:26] "Reconciler: start to sync state"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.348883    1648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.352701    1648 cpu_manager.go:221] "Starting CPU manager" policy="none"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.352736    1648 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.352866    1648 state_mem.go:36] "Initialized new in-memory state store"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353577    1648 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353729    1648 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353769    1648 policy_none.go:49] "None policy: Start"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353902    1648 memory_manager.go:186] "Starting memorymanager" policy="None"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353967    1648 state_mem.go:35] "Initializing new in-memory state store"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.354751    1648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.354791    1648 status_manager.go:227] "Starting to sync pod status with apiserver"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.354811    1648 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.354819    1648 kubelet.go:2388] "Starting kubelet main sync loop"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.354862    1648 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.355393    1648 state_mem.go:75] "Updated machine memory state"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.358802    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.358857    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.371233    1648 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"multinode-659000\" not found"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.373395    1648 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.373786    1648 eviction_manager.go:189] "Eviction manager: starting control loop"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.373887    1648 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.380088    1648 iptables.go:577] "Could not set up iptables canary" err=<
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.380760    1648 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.380984    1648 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-659000\" not found"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.382902    1648 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.468172    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.468821    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c82c0ec4aeaa9b21462a8248326ae982d6f7a0aee31347f1a58d216f0335177"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.468934    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2d0bd65fe50d3b8a64acf8ee065aa49d1a51b768c5fe6fe9532d26fa35aa7b1"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.468988    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bd5bf99bede3e691e572fc4b8a37f4f42f8a9b2520adf8bc87bdf76e8258a4b"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.469050    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5423fc5113290b937df9b531c5fbd748c5d927fd5e170e8126b67bae6a814384"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.470252    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.475717    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.477090    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-659000?timeout=10s\": dial tcp 172.29.198.106:8443: connect: connection refused" interval="400ms"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.480196    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.198.106:8443: connect: connection refused" node="multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.487429    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc9ef8ee86ec2e354006c4c56f82fe9ec4df472096628ad620faba06fa0b1ff8"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.508448    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a53e133a1cd6ab9514cb15ac3c4f1d5683d17008b482cebb08bf4809e060709"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.523288    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="319cddeebceb6ec82b5865f1c67eaf88948a282ace1113869910f5bf8c717d83"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.545844    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b522c4c9f4c776ea35298b9eaf7c05d64bddd6f385e12252bdf6aada9a3e20d"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.566476    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e6c90fc43fa6c0754218ff1c4162045d-kubeconfig\") pod \"kube-scheduler-multinode-659000\" (UID: \"e6c90fc43fa6c0754218ff1c4162045d\") " pod="kube-system/kube-scheduler-multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.566534    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b9fbd177058ba298cde2a92c4ef5c601-k8s-certs\") pod \"kube-apiserver-multinode-659000\" (UID: \"b9fbd177058ba298cde2a92c4ef5c601\") " pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.566560    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-kubeconfig\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567472    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567527    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/575cefa3aa8017dce576fa244e719a4e-etcd-certs\") pod \"etcd-multinode-659000\" (UID: \"575cefa3aa8017dce576fa244e719a4e\") " pod="kube-system/etcd-multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567546    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/575cefa3aa8017dce576fa244e719a4e-etcd-data\") pod \"etcd-multinode-659000\" (UID: \"575cefa3aa8017dce576fa244e719a4e\") " pod="kube-system/etcd-multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567563    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b9fbd177058ba298cde2a92c4ef5c601-ca-certs\") pod \"kube-apiserver-multinode-659000\" (UID: \"b9fbd177058ba298cde2a92c4ef5c601\") " pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567580    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-ca-certs\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567687    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-flexvolume-dir\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567720    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-k8s-certs\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567745    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b9fbd177058ba298cde2a92c4ef5c601-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-659000\" (UID: \"b9fbd177058ba298cde2a92c4ef5c601\") " pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567166    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51ee4649b24aa281b3767c049c3c1d4063e516b98501648152da39ee45cb0b26"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.569350    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.570289    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.681872    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.682569    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.198.106:8443: connect: connection refused" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.878668    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-659000?timeout=10s\": dial tcp 172.29.198.106:8443: connect: connection refused" interval="800ms"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: W0127 12:35:37.056372    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.056534    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: I0127 12:35:37.084276    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.085344    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.198.106:8443: connect: connection refused" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: W0127 12:35:37.281985    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.282078    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: W0127 12:35:37.629266    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-659000&limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.629409    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-659000&limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: W0127 12:35:37.673700    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.673876    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.680515    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-659000?timeout=10s\": dial tcp 172.29.198.106:8443: connect: connection refused" interval="1.6s"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: I0127 12:35:37.887498    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.888458    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.198.106:8443: connect: connection refused" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: E0127 12:35:39.058364    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: E0127 12:35:39.084210    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: E0127 12:35:39.099659    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: E0127 12:35:39.112572    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: I0127 12:35:39.489967    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:40 multinode-659000 kubelet[1648]: E0127 12:35:40.123734    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:40 multinode-659000 kubelet[1648]: E0127 12:35:40.124212    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:40 multinode-659000 kubelet[1648]: E0127 12:35:40.124507    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:40 multinode-659000 kubelet[1648]: E0127 12:35:40.124790    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.138584    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.139346    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.139719    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.469180    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.513020    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-multinode-659000\" already exists" pod="kube-system/etcd-multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.513064    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.538800    1648 kubelet_node_status.go:125] "Node was previously registered" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.538905    1648 kubelet_node_status.go:79] "Successfully registered node" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.538949    1648 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.539897    1648 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.540655    1648 setters.go:602] "Node became not ready" node="multinode-659000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-27T12:35:41Z","lastTransitionTime":"2025-01-27T12:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.555833    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-multinode-659000\" already exists" pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.555924    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.574323    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-multinode-659000\" already exists" pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.574484    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-multinode-659000"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.589698    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-multinode-659000\" already exists" pod="kube-system/kube-scheduler-multinode-659000"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.247993    1648 apiserver.go:52] "Watching apiserver"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.255092    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-659000" podUID="f19e9efc-57cc-4e2a-b365-920592a7f352"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.257281    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.257504    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.261197    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/etcd-multinode-659000" podUID="d2a9c448-86a1-48e3-8b48-345c937e5bb4"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.277187    1648 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.304401    1648 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/etcd-multinode-659000"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.304607    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-659000"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.309849    1648 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.309963    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.343249    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae3b8daf-d674-4cfe-8652-cb5ff6ba8615-lib-modules\") pod \"kube-proxy-s46mv\" (UID: \"ae3b8daf-d674-4cfe-8652-cb5ff6ba8615\") " pod="kube-system/kube-proxy-s46mv"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.343617    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9b617a9c-e2b8-45fd-bee2-45cb03d4cd42-cni-cfg\") pod \"kindnet-z2hqq\" (UID: \"9b617a9c-e2b8-45fd-bee2-45cb03d4cd42\") " pod="kube-system/kindnet-z2hqq"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.343779    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b617a9c-e2b8-45fd-bee2-45cb03d4cd42-lib-modules\") pod \"kindnet-z2hqq\" (UID: \"9b617a9c-e2b8-45fd-bee2-45cb03d4cd42\") " pod="kube-system/kindnet-z2hqq"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.343961    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae3b8daf-d674-4cfe-8652-cb5ff6ba8615-xtables-lock\") pod \"kube-proxy-s46mv\" (UID: \"ae3b8daf-d674-4cfe-8652-cb5ff6ba8615\") " pod="kube-system/kube-proxy-s46mv"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.344263    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b617a9c-e2b8-45fd-bee2-45cb03d4cd42-xtables-lock\") pod \"kindnet-z2hqq\" (UID: \"9b617a9c-e2b8-45fd-bee2-45cb03d4cd42\") " pod="kube-system/kindnet-z2hqq"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.344443    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bcfd7913-1bc0-4c24-882f-2be92ec9b046-tmp\") pod \"storage-provisioner\" (UID: \"bcfd7913-1bc0-4c24-882f-2be92ec9b046\") " pod="kube-system/storage-provisioner"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.345456    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.345573    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:42.845554363 +0000 UTC m=+6.750229019 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.362165    1648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bf31ca1befb4fb3e8f2fd27458a3b80" path="/var/lib/kubelet/pods/6bf31ca1befb4fb3e8f2fd27458a3b80/volumes"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.363294    1648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7291ea72d8be6e47ed8b536906d73549" path="/var/lib/kubelet/pods/7291ea72d8be6e47ed8b536906d73549/volumes"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.396667    1648 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.400478    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.400505    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.400550    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:42.900534148 +0000 UTC m=+6.805208804 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.494698    1648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-659000" podStartSLOduration=0.494540064 podStartE2EDuration="494.540064ms" podCreationTimestamp="2025-01-27 12:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 12:35:42.473709794 +0000 UTC m=+6.378384350" watchObservedRunningTime="2025-01-27 12:35:42.494540064 +0000 UTC m=+6.399214620"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.494964    1648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-659000" podStartSLOduration=0.494955765 podStartE2EDuration="494.955765ms" podCreationTimestamp="2025-01-27 12:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 12:35:42.493805361 +0000 UTC m=+6.398480017" watchObservedRunningTime="2025-01-27 12:35:42.494955765 +0000 UTC m=+6.399630321"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.849608    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.849827    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:43.849803559 +0000 UTC m=+7.754478115 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.951539    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.951579    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.951637    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:43.951620201 +0000 UTC m=+7.856294757 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: I0127 12:35:43.230846    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b613e9a7a356580fd5381e358408317fd6120a119c23f3f196adda302e5ca97f"
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: I0127 12:35:43.240666    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34d579bb511fec290478f20b13002063b43c1a71bd6f2f45f1d83bbd8ac971ab"
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.588436    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: I0127 12:35:43.594121    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d43e4cc62e0877d4b65191623d58195cd33c60eff33c6e49e605f69620d5115f"
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: I0127 12:35:43.594816    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-659000" podUID="f19e9efc-57cc-4e2a-b365-920592a7f352"
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.861607    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.861754    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:45.861734662 +0000 UTC m=+9.766409318 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.962791    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.962845    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.963033    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:45.962955102 +0000 UTC m=+9.867629758 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:44 multinode-659000 kubelet[1648]: E0127 12:35:44.356390    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.355639    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.883867    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.883991    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:49.883972962 +0000 UTC m=+13.788647618 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.984260    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.984313    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.984377    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:49.984359299 +0000 UTC m=+13.889033855 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:46 multinode-659000 kubelet[1648]: E0127 12:35:46.358731    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:46 multinode-659000 kubelet[1648]: E0127 12:35:46.386967    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:47 multinode-659000 kubelet[1648]: E0127 12:35:47.355582    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:48 multinode-659000 kubelet[1648]: E0127 12:35:48.356308    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:49 multinode-659000 kubelet[1648]: E0127 12:35:49.356027    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:49 multinode-659000 kubelet[1648]: E0127 12:35:49.925365    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:49 multinode-659000 kubelet[1648]: E0127 12:35:49.925459    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:57.925443152 +0000 UTC m=+21.830117808 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:50 multinode-659000 kubelet[1648]: E0127 12:35:50.027100    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:50 multinode-659000 kubelet[1648]: E0127 12:35:50.027219    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:50 multinode-659000 kubelet[1648]: E0127 12:35:50.027346    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:58.027289813 +0000 UTC m=+21.931964469 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:50 multinode-659000 kubelet[1648]: E0127 12:35:50.355319    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:51 multinode-659000 kubelet[1648]: E0127 12:35:51.356503    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:51 multinode-659000 kubelet[1648]: E0127 12:35:51.388594    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:52 multinode-659000 kubelet[1648]: E0127 12:35:52.357390    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:53 multinode-659000 kubelet[1648]: E0127 12:35:53.355568    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:54 multinode-659000 kubelet[1648]: E0127 12:35:54.355531    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:55 multinode-659000 kubelet[1648]: E0127 12:35:55.356228    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:56 multinode-659000 kubelet[1648]: E0127 12:35:56.355726    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:56 multinode-659000 kubelet[1648]: E0127 12:35:56.392446    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:57 multinode-659000 kubelet[1648]: E0127 12:35:57.355790    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.001233    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.001401    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:36:14.001383565 +0000 UTC m=+37.906058121 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.101493    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.101659    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.101748    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:36:14.101732786 +0000 UTC m=+38.006407342 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.365026    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:59 multinode-659000 kubelet[1648]: E0127 12:35:59.356031    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:36:00 multinode-659000 kubelet[1648]: E0127 12:36:00.356282    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:36:01 multinode-659000 kubelet[1648]: E0127 12:36:01.356209    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:36:01 multinode-659000 kubelet[1648]: E0127 12:36:01.394292    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:36:02 multinode-659000 kubelet[1648]: E0127 12:36:02.355777    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:36:03 multinode-659000 kubelet[1648]: E0127 12:36:03.356166    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:36:04 multinode-659000 kubelet[1648]: E0127 12:36:04.356089    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:36:05 multinode-659000 kubelet[1648]: E0127 12:36:05.355458    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:36:06 multinode-659000 kubelet[1648]: E0127 12:36:06.356120    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:36:06 multinode-659000 kubelet[1648]: E0127 12:36:06.396811    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:36:07 multinode-659000 kubelet[1648]: E0127 12:36:07.355573    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:36:08 multinode-659000 kubelet[1648]: E0127 12:36:08.355837    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:09 multinode-659000 kubelet[1648]: E0127 12:36:09.355284    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:10 multinode-659000 kubelet[1648]: E0127 12:36:10.356199    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:11 multinode-659000 kubelet[1648]: E0127 12:36:11.356023    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:11 multinode-659000 kubelet[1648]: E0127 12:36:11.398054    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:12 multinode-659000 kubelet[1648]: E0127 12:36:12.355492    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:13 multinode-659000 kubelet[1648]: E0127 12:36:13.356291    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.058689    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.058911    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:36:46.058858304 +0000 UTC m=+69.963532860 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.159091    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.159277    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.159495    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:36:46.15947175 +0000 UTC m=+70.064146406 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.357000    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:15 multinode-659000 kubelet[1648]: I0127 12:36:15.031682    1648 scope.go:117] "RemoveContainer" containerID="134620caeeb93fda5b32a71962e13d1994830a35b93b18ad2387296500dff7b5"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:15 multinode-659000 kubelet[1648]: I0127 12:36:15.032024    1648 scope.go:117] "RemoveContainer" containerID="9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:15 multinode-659000 kubelet[1648]: E0127 12:36:15.032236    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bcfd7913-1bc0-4c24-882f-2be92ec9b046)\"" pod="kube-system/storage-provisioner" podUID="bcfd7913-1bc0-4c24-882f-2be92ec9b046"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:15 multinode-659000 kubelet[1648]: E0127 12:36:15.355738    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:16 multinode-659000 kubelet[1648]: E0127 12:36:16.356191    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:16 multinode-659000 kubelet[1648]: E0127 12:36:16.399212    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:17 multinode-659000 kubelet[1648]: E0127 12:36:17.355082    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:18 multinode-659000 kubelet[1648]: E0127 12:36:18.356067    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:19 multinode-659000 kubelet[1648]: E0127 12:36:19.355675    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:20 multinode-659000 kubelet[1648]: E0127 12:36:20.356455    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:21 multinode-659000 kubelet[1648]: E0127 12:36:21.355971    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:21 multinode-659000 kubelet[1648]: E0127 12:36:21.401078    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:22 multinode-659000 kubelet[1648]: E0127 12:36:22.355954    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:23 multinode-659000 kubelet[1648]: E0127 12:36:23.355387    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:24 multinode-659000 kubelet[1648]: E0127 12:36:24.355437    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:25 multinode-659000 kubelet[1648]: E0127 12:36:25.356289    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:26 multinode-659000 kubelet[1648]: E0127 12:36:26.356493    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:26 multinode-659000 kubelet[1648]: E0127 12:36:26.402364    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 kubelet[1648]: E0127 12:36:27.356407    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.179341    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 kubelet[1648]: I0127 12:36:27.357050    1648 scope.go:117] "RemoveContainer" containerID="9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f"
	I0127 12:36:54.179341    9948 command_runner.go:130] > Jan 27 12:36:28 multinode-659000 kubelet[1648]: E0127 12:36:28.356371    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.179341    9948 command_runner.go:130] > Jan 27 12:36:29 multinode-659000 kubelet[1648]: E0127 12:36:29.355555    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.179341    9948 command_runner.go:130] > Jan 27 12:36:30 multinode-659000 kubelet[1648]: E0127 12:36:30.356227    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.179341    9948 command_runner.go:130] > Jan 27 12:36:31 multinode-659000 kubelet[1648]: E0127 12:36:31.356043    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.179341    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]: I0127 12:36:36.363314    1648 scope.go:117] "RemoveContainer" containerID="5f274e5a8851d2aeb5403952c3fba0274fe53614e2e0995d1046693d7e725d5d"
	I0127 12:36:54.179341    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]: E0127 12:36:36.393311    1648 iptables.go:577] "Could not set up iptables canary" err=<
	I0127 12:36:54.179341    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0127 12:36:54.179341    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0127 12:36:54.179341    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0127 12:36:54.179341    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0127 12:36:54.179341    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]: I0127 12:36:36.409087    1648 scope.go:117] "RemoveContainer" containerID="f91e9c2d3ba64a6d34c9bab7c1953b46f4006e0bb493bd1ae993c489cd76e02c"
	I0127 12:36:54.227269    9948 logs.go:123] Gathering logs for kube-apiserver [ea993630a310] ...
	I0127 12:36:54.227269    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea993630a310"
	I0127 12:36:54.256885    9948 command_runner.go:130] ! W0127 12:35:38.851605       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0127 12:36:54.257701    9948 command_runner.go:130] ! I0127 12:35:38.853397       1 options.go:238] external host was not specified, using 172.29.198.106
	I0127 12:36:54.257701    9948 command_runner.go:130] ! I0127 12:35:38.858160       1 server.go:143] Version: v1.32.1
	I0127 12:36:54.257852    9948 command_runner.go:130] ! I0127 12:35:38.858493       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:54.257852    9948 command_runner.go:130] ! I0127 12:35:39.798695       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0127 12:36:54.257932    9948 command_runner.go:130] ! I0127 12:35:39.843688       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 12:36:54.258085    9948 command_runner.go:130] ! I0127 12:35:39.853521       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0127 12:36:54.258113    9948 command_runner.go:130] ! I0127 12:35:39.853736       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0127 12:36:54.258113    9948 command_runner.go:130] ! I0127 12:35:39.854572       1 instance.go:233] Using reconciler: lease
	I0127 12:36:54.258113    9948 command_runner.go:130] ! I0127 12:35:39.914509       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0127 12:36:54.258113    9948 command_runner.go:130] ! W0127 12:35:39.914792       1 genericapiserver.go:767] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258113    9948 command_runner.go:130] ! I0127 12:35:40.232206       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0127 12:36:54.258113    9948 command_runner.go:130] ! I0127 12:35:40.232893       1 apis.go:106] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0127 12:36:54.258113    9948 command_runner.go:130] ! I0127 12:35:40.488401       1 apis.go:106] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0127 12:36:54.258113    9948 command_runner.go:130] ! I0127 12:35:40.610998       1 apis.go:106] API group "resource.k8s.io" is not enabled, skipping.
	I0127 12:36:54.258113    9948 command_runner.go:130] ! I0127 12:35:40.646097       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0127 12:36:54.258113    9948 command_runner.go:130] ! W0127 12:35:40.646401       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258113    9948 command_runner.go:130] ! W0127 12:35:40.646556       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:54.258651    9948 command_runner.go:130] ! I0127 12:35:40.647499       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0127 12:36:54.258651    9948 command_runner.go:130] ! W0127 12:35:40.647580       1 genericapiserver.go:767] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258651    9948 command_runner.go:130] ! I0127 12:35:40.648520       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0127 12:36:54.258697    9948 command_runner.go:130] ! I0127 12:35:40.649666       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0127 12:36:54.258697    9948 command_runner.go:130] ! W0127 12:35:40.649756       1 genericapiserver.go:767] Skipping API autoscaling/v2beta1 because it has no resources.
	I0127 12:36:54.258697    9948 command_runner.go:130] ! W0127 12:35:40.649766       1 genericapiserver.go:767] Skipping API autoscaling/v2beta2 because it has no resources.
	I0127 12:36:54.258745    9948 command_runner.go:130] ! I0127 12:35:40.651998       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0127 12:36:54.258745    9948 command_runner.go:130] ! W0127 12:35:40.652100       1 genericapiserver.go:767] Skipping API batch/v1beta1 because it has no resources.
	I0127 12:36:54.258745    9948 command_runner.go:130] ! I0127 12:35:40.653327       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0127 12:36:54.258792    9948 command_runner.go:130] ! W0127 12:35:40.653629       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.653645       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! I0127 12:35:40.654270       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.654362       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.654371       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1alpha2 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! I0127 12:35:40.655349       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.655494       1 genericapiserver.go:767] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! I0127 12:35:40.657969       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.658067       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.658077       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! I0127 12:35:40.658845       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.658940       1 genericapiserver.go:767] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.658951       1 genericapiserver.go:767] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! I0127 12:35:40.660043       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.660172       1 genericapiserver.go:767] Skipping API policy/v1beta1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! I0127 12:35:40.662431       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.662519       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.662531       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! I0127 12:35:40.663022       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.663153       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.663165       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! I0127 12:35:40.666344       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.666495       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.666521       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! I0127 12:35:40.668345       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.668516       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta3 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.668527       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.668531       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! I0127 12:35:40.673502       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.673587       1 genericapiserver.go:767] Skipping API apps/v1beta2 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.673597       1 genericapiserver.go:767] Skipping API apps/v1beta1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! I0127 12:35:40.676193       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.676284       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.676294       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! I0127 12:35:40.677186       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.677276       1 genericapiserver.go:767] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! I0127 12:35:40.688978       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.689072       1 genericapiserver.go:767] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.259365    9948 command_runner.go:130] ! I0127 12:35:41.320439       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 12:36:54.259365    9948 command_runner.go:130] ! I0127 12:35:41.320849       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:54.259441    9948 command_runner.go:130] ! I0127 12:35:41.321234       1 secure_serving.go:213] Serving securely on [::]:8443
	I0127 12:36:54.259441    9948 command_runner.go:130] ! I0127 12:35:41.321512       1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0127 12:36:54.259441    9948 command_runner.go:130] ! I0127 12:35:41.324372       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:54.259441    9948 command_runner.go:130] ! I0127 12:35:41.325924       1 controller.go:119] Starting legacy_token_tracking_controller
	I0127 12:36:54.259542    9948 command_runner.go:130] ! I0127 12:35:41.326193       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0127 12:36:54.259542    9948 command_runner.go:130] ! I0127 12:35:41.327573       1 remote_available_controller.go:411] Starting RemoteAvailability controller
	I0127 12:36:54.259542    9948 command_runner.go:130] ! I0127 12:35:41.328217       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I0127 12:36:54.259542    9948 command_runner.go:130] ! I0127 12:35:41.328319       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.329060       1 cluster_authentication_trust_controller.go:462] Starting cluster_authentication_trust_controller controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.329095       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.329225       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.329996       1 controller.go:78] Starting OpenAPI AggregationController
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.330057       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.330085       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.330333       1 apiservice_controller.go:100] Starting APIServiceRegistrationController
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.330379       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.331391       1 customresource_discovery_controller.go:292] Starting DiscoveryController
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.331485       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.327929       1 local_available_controller.go:156] Starting LocalAvailability controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.333671       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.333703       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.333958       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.335863       1 controller.go:142] Starting OpenAPI controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.336704       1 controller.go:90] Starting OpenAPI V3 controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.336831       1 naming_controller.go:294] Starting NamingConditionController
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.337057       1 establishing_controller.go:81] Starting EstablishingController
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.337215       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.337324       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.337408       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.327968       1 aggregator.go:169] waiting for initial CRD sync...
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.387084       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.387441       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.450926       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.451366       1 policy_source.go:240] refreshing policies
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.488750       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.488990       1 aggregator.go:171] initial CRD sync complete...
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.489245       1 autoregister_controller.go:144] Starting autoregister controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.489480       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.489653       1 cache.go:39] Caches are synced for autoregister controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.499151       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.527390       1 shared_informer.go:320] Caches are synced for configmaps
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.528625       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.529892       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.530639       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.531604       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0127 12:36:54.260117    9948 command_runner.go:130] ! I0127 12:35:41.531638       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0127 12:36:54.260117    9948 command_runner.go:130] ! I0127 12:35:41.534721       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0127 12:36:54.260174    9948 command_runner.go:130] ! I0127 12:35:41.540933       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 12:36:54.260174    9948 command_runner.go:130] ! I0127 12:35:41.545944       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0127 12:36:54.260174    9948 command_runner.go:130] ! I0127 12:35:42.357869       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0127 12:36:54.260174    9948 command_runner.go:130] ! I0127 12:35:42.374307       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 12:36:54.260174    9948 command_runner.go:130] ! W0127 12:35:43.074223       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.29.198.106]
	I0127 12:36:54.260258    9948 command_runner.go:130] ! I0127 12:35:43.075938       1 controller.go:615] quota admission added evaluator for: endpoints
	I0127 12:36:54.260258    9948 command_runner.go:130] ! I0127 12:35:43.085006       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0127 12:36:54.260258    9948 command_runner.go:130] ! I0127 12:35:44.603084       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0127 12:36:54.260258    9948 command_runner.go:130] ! I0127 12:35:44.989601       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0127 12:36:54.260258    9948 command_runner.go:130] ! I0127 12:35:45.141450       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0127 12:36:54.260258    9948 command_runner.go:130] ! I0127 12:35:45.327075       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 12:36:54.260258    9948 command_runner.go:130] ! I0127 12:35:45.338333       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 12:36:54.267730    9948 logs.go:123] Gathering logs for etcd [0ef2a3b50bae] ...
	I0127 12:36:54.267790    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ef2a3b50bae"
	I0127 12:36:54.292327    9948 command_runner.go:130] ! {"level":"warn","ts":"2025-01-27T12:35:38.248296Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0127 12:36:54.292475    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.248523Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.29.198.106:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.29.198.106:2380","--initial-cluster=multinode-659000=https://172.29.198.106:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.29.198.106:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.29.198.106:2380","--name=multinode-659000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0127 12:36:54.292475    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.249804Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0127 12:36:54.292559    9948 command_runner.go:130] ! {"level":"warn","ts":"2025-01-27T12:35:38.249933Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0127 12:36:54.292559    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.249951Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://172.29.198.106:2380"]}
	I0127 12:36:54.292648    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.250358Z","caller":"embed/etcd.go:497","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0127 12:36:54.292648    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.255871Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.29.198.106:2379"]}
	I0127 12:36:54.292793    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.258341Z","caller":"embed/etcd.go:311","msg":"starting an etcd server","etcd-version":"3.5.16","git-sha":"f20bbad","go-version":"go1.22.7","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-659000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.29.198.106:2380"],"listen-peer-urls":["https://172.29.198.106:2380"],"advertise-client-urls":["https://172.29.198.106:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.198.106:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0127 12:36:54.292868    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.282453Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"23.428079ms"}
	I0127 12:36:54.292868    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.322950Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0127 12:36:54.292938    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.352706Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"d020e240c474bd89","local-member-id":"925e6945be3a5b5b","commit-index":2090}
	I0127 12:36:54.292938    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.354002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b switched to configuration voters=()"}
	I0127 12:36:54.292938    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.354079Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became follower at term 2"}
	I0127 12:36:54.292938    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.354103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 925e6945be3a5b5b [peers: [], term: 2, commit: 2090, applied: 0, lastindex: 2090, lastterm: 2]"}
	I0127 12:36:54.292938    9948 command_runner.go:130] ! {"level":"warn","ts":"2025-01-27T12:35:38.367343Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0127 12:36:54.292938    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.371532Z","caller":"mvcc/kvstore.go:346","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1388}
	I0127 12:36:54.292938    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.377112Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":1808}
	I0127 12:36:54.292938    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.386775Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0127 12:36:54.293168    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.395908Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"925e6945be3a5b5b","timeout":"7s"}
	I0127 12:36:54.293168    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.396497Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"925e6945be3a5b5b"}
	I0127 12:36:54.293168    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.396684Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"925e6945be3a5b5b","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	I0127 12:36:54.293234    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.396970Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	I0127 12:36:54.293234    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.399309Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0127 12:36:54.293374    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.401105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b switched to configuration voters=(10546983125613435739)"}
	I0127 12:36:54.293446    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.400045Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0127 12:36:54.293446    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.404834Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0127 12:36:54.293533    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.404888Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0127 12:36:54.293595    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.405566Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d020e240c474bd89","local-member-id":"925e6945be3a5b5b","added-peer-id":"925e6945be3a5b5b","added-peer-peer-urls":["https://172.29.204.17:2380"]}
	I0127 12:36:54.293595    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.405716Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d020e240c474bd89","local-member-id":"925e6945be3a5b5b","cluster-version":"3.5"}
	I0127 12:36:54.293595    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.405754Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0127 12:36:54.293680    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.407643Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0127 12:36:54.293747    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.408091Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"925e6945be3a5b5b","initial-advertise-peer-urls":["https://172.29.198.106:2380"],"listen-peer-urls":["https://172.29.198.106:2380"],"advertise-client-urls":["https://172.29.198.106:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.198.106:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0127 12:36:54.293747    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.408386Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0127 12:36:54.293875    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.408686Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.29.198.106:2380"}
	I0127 12:36:54.293875    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.408809Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.29.198.106:2380"}
	I0127 12:36:54.293927    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.355207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b is starting a new election at term 2"}
	I0127 12:36:54.293927    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.355615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became pre-candidate at term 2"}
	I0127 12:36:54.293962    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.355770Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b received MsgPreVoteResp from 925e6945be3a5b5b at term 2"}
	I0127 12:36:54.293962    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.355926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became candidate at term 3"}
	I0127 12:36:54.293962    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.356088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b received MsgVoteResp from 925e6945be3a5b5b at term 3"}
	I0127 12:36:54.293962    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.356235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became leader at term 3"}
	I0127 12:36:54.294034    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.356449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 925e6945be3a5b5b elected leader 925e6945be3a5b5b at term 3"}
	I0127 12:36:54.294034    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.368540Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"925e6945be3a5b5b","local-member-attributes":"{Name:multinode-659000 ClientURLs:[https://172.29.198.106:2379]}","request-path":"/0/members/925e6945be3a5b5b/attributes","cluster-id":"d020e240c474bd89","publish-timeout":"7s"}
	I0127 12:36:54.294034    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.369045Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0127 12:36:54.294093    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.371833Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0127 12:36:54.294093    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.372238Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0127 12:36:54.294093    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.374158Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0127 12:36:54.294147    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.383680Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0127 12:36:54.294147    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.391404Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0127 12:36:54.294192    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.392982Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.29.198.106:2379"}
	I0127 12:36:54.294215    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.399505Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0127 12:36:54.302336    9948 logs.go:123] Gathering logs for kube-scheduler [ed51c7eaa966] ...
	I0127 12:36:54.302336    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed51c7eaa966"
	I0127 12:36:54.329330    9948 command_runner.go:130] ! I0127 12:35:39.285954       1 serving.go:386] Generated self-signed cert in-memory
	I0127 12:36:54.329912    9948 command_runner.go:130] ! W0127 12:35:41.361191       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0127 12:36:54.329912    9948 command_runner.go:130] ! W0127 12:35:41.363231       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0127 12:36:54.329986    9948 command_runner.go:130] ! W0127 12:35:41.363467       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0127 12:36:54.330006    9948 command_runner.go:130] ! W0127 12:35:41.363598       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 12:36:54.330006    9948 command_runner.go:130] ! I0127 12:35:41.458309       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 12:36:54.330105    9948 command_runner.go:130] ! I0127 12:35:41.458594       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:54.330132    9948 command_runner.go:130] ! I0127 12:35:41.465036       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 12:36:54.330132    9948 command_runner.go:130] ! I0127 12:35:41.465587       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 12:36:54.330132    9948 command_runner.go:130] ! I0127 12:35:41.466480       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:36:54.330132    9948 command_runner.go:130] ! I0127 12:35:41.466554       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:54.330132    9948 command_runner.go:130] ! I0127 12:35:41.567642       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:36:54.332685    9948 logs.go:123] Gathering logs for kube-scheduler [a16e06a03860] ...
	I0127 12:36:54.332685    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a16e06a03860"
	I0127 12:36:54.365785    9948 command_runner.go:130] ! I0127 12:11:54.280431       1 serving.go:386] Generated self-signed cert in-memory
	I0127 12:36:54.365869    9948 command_runner.go:130] ! W0127 12:11:55.581187       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0127 12:36:54.365869    9948 command_runner.go:130] ! W0127 12:11:55.581309       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0127 12:36:54.365997    9948 command_runner.go:130] ! W0127 12:11:55.581382       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0127 12:36:54.365997    9948 command_runner.go:130] ! W0127 12:11:55.581390       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 12:36:54.365997    9948 command_runner.go:130] ! I0127 12:11:55.694969       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 12:36:54.365997    9948 command_runner.go:130] ! I0127 12:11:55.695193       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:54.365997    9948 command_runner.go:130] ! I0127 12:11:55.700077       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 12:36:54.365997    9948 command_runner.go:130] ! I0127 12:11:55.700446       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 12:36:54.365997    9948 command_runner.go:130] ! I0127 12:11:55.700992       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:36:54.365997    9948 command_runner.go:130] ! I0127 12:11:55.701410       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:54.365997    9948 command_runner.go:130] ! W0127 12:11:55.715521       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:54.365997    9948 command_runner.go:130] ! E0127 12:11:55.717196       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.365997    9948 command_runner.go:130] ! W0127 12:11:55.717649       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0127 12:36:54.365997    9948 command_runner.go:130] ! E0127 12:11:55.717921       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.365997    9948 command_runner.go:130] ! W0127 12:11:55.718583       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0127 12:36:54.365997    9948 command_runner.go:130] ! E0127 12:11:55.718820       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.365997    9948 command_runner.go:130] ! W0127 12:11:55.728298       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0127 12:36:54.365997    9948 command_runner.go:130] ! E0127 12:11:55.728648       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0127 12:36:54.365997    9948 command_runner.go:130] ! W0127 12:11:55.729000       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0127 12:36:54.365997    9948 command_runner.go:130] ! E0127 12:11:55.729243       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.365997    9948 command_runner.go:130] ! W0127 12:11:55.729633       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0127 12:36:54.365997    9948 command_runner.go:130] ! E0127 12:11:55.730380       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.365997    9948 command_runner.go:130] ! W0127 12:11:55.729677       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0127 12:36:54.366530    9948 command_runner.go:130] ! E0127 12:11:55.730837       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.366585    9948 command_runner.go:130] ! W0127 12:11:55.729713       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0127 12:36:54.366585    9948 command_runner.go:130] ! W0127 12:11:55.729749       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:54.366585    9948 command_runner.go:130] ! E0127 12:11:55.731479       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.366696    9948 command_runner.go:130] ! W0127 12:11:55.729782       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:54.366723    9948 command_runner.go:130] ! E0127 12:11:55.732242       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.366723    9948 command_runner.go:130] ! W0127 12:11:55.729811       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:54.366723    9948 command_runner.go:130] ! E0127 12:11:55.734240       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.366723    9948 command_runner.go:130] ! E0127 12:11:55.734704       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.366723    9948 command_runner.go:130] ! W0127 12:11:55.738077       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0127 12:36:54.366723    9948 command_runner.go:130] ! E0127 12:11:55.738873       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.366723    9948 command_runner.go:130] ! W0127 12:11:55.739202       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0127 12:36:54.366723    9948 command_runner.go:130] ! E0127 12:11:55.739366       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.366723    9948 command_runner.go:130] ! W0127 12:11:55.739719       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0127 12:36:54.366723    9948 command_runner.go:130] ! E0127 12:11:55.739865       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.366723    9948 command_runner.go:130] ! W0127 12:11:55.740221       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0127 12:36:54.366723    9948 command_runner.go:130] ! E0127 12:11:55.740378       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.366723    9948 command_runner.go:130] ! W0127 12:11:55.740608       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:54.366723    9948 command_runner.go:130] ! E0127 12:11:55.740761       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.366723    9948 command_runner.go:130] ! W0127 12:11:56.556598       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0127 12:36:54.366723    9948 command_runner.go:130] ! E0127 12:11:56.557622       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.367252    9948 command_runner.go:130] ! W0127 12:11:56.595830       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:54.367367    9948 command_runner.go:130] ! E0127 12:11:56.596047       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.367367    9948 command_runner.go:130] ! W0127 12:11:56.691826       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0127 12:36:54.367452    9948 command_runner.go:130] ! E0127 12:11:56.691909       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0127 12:36:54.367545    9948 command_runner.go:130] ! W0127 12:11:56.806048       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:54.367545    9948 command_runner.go:130] ! E0127 12:11:56.806109       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.367608    9948 command_runner.go:130] ! W0127 12:11:56.846817       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0127 12:36:54.367631    9948 command_runner.go:130] ! E0127 12:11:56.847194       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.367690    9948 command_runner.go:130] ! W0127 12:11:56.871314       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0127 12:36:54.367725    9948 command_runner.go:130] ! E0127 12:11:56.872178       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.367725    9948 command_runner.go:130] ! W0127 12:11:56.887386       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0127 12:36:54.367797    9948 command_runner.go:130] ! E0127 12:11:56.887549       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.367797    9948 command_runner.go:130] ! W0127 12:11:56.918642       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0127 12:36:54.367897    9948 command_runner.go:130] ! E0127 12:11:56.919135       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.367897    9948 command_runner.go:130] ! W0127 12:11:57.039216       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:54.367897    9948 command_runner.go:130] ! E0127 12:11:57.039707       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.367897    9948 command_runner.go:130] ! W0127 12:11:57.055169       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0127 12:36:54.367897    9948 command_runner.go:130] ! E0127 12:11:57.055233       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.367897    9948 command_runner.go:130] ! W0127 12:11:57.106656       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0127 12:36:54.367897    9948 command_runner.go:130] ! E0127 12:11:57.106828       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.367897    9948 command_runner.go:130] ! W0127 12:11:57.214186       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:54.367897    9948 command_runner.go:130] ! E0127 12:11:57.214290       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.367897    9948 command_runner.go:130] ! W0127 12:11:57.298150       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0127 12:36:54.367897    9948 command_runner.go:130] ! E0127 12:11:57.298337       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.367897    9948 command_runner.go:130] ! W0127 12:11:57.310098       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0127 12:36:54.367897    9948 command_runner.go:130] ! E0127 12:11:57.310312       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.367897    9948 command_runner.go:130] ! W0127 12:11:57.312117       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:54.367897    9948 command_runner.go:130] ! E0127 12:11:57.312192       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.368419    9948 command_runner.go:130] ! W0127 12:11:57.321525       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0127 12:36:54.368460    9948 command_runner.go:130] ! E0127 12:11:57.321832       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.368460    9948 command_runner.go:130] ! I0127 12:11:59.701790       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:36:54.368513    9948 command_runner.go:130] ! I0127 12:33:15.443053       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0127 12:36:54.368513    9948 command_runner.go:130] ! I0127 12:33:15.443143       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0127 12:36:54.368513    9948 command_runner.go:130] ! I0127 12:33:15.452458       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 12:36:54.368513    9948 command_runner.go:130] ! E0127 12:33:15.487412       1 run.go:72] "command failed" err="finished without leader elect"
	I0127 12:36:54.379298    9948 logs.go:123] Gathering logs for kube-controller-manager [e07a66f8f619] ...
	I0127 12:36:54.379298    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e07a66f8f619"
	I0127 12:36:54.405309    9948 command_runner.go:130] ! I0127 12:11:53.668834       1 serving.go:386] Generated self-signed cert in-memory
	I0127 12:36:54.405309    9948 command_runner.go:130] ! I0127 12:11:53.986868       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0127 12:36:54.405309    9948 command_runner.go:130] ! I0127 12:11:53.987309       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:54.405309    9948 command_runner.go:130] ! I0127 12:11:53.989401       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0127 12:36:54.405309    9948 command_runner.go:130] ! I0127 12:11:53.990012       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:54.405309    9948 command_runner.go:130] ! I0127 12:11:53.990187       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:53.990322       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.581695       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.581741       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.615284       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.615497       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.615545       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.626456       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.626896       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.626952       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.636784       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.636866       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.637077       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.637108       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.649619       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.649750       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.649765       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.650223       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.650457       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.682646       1 shared_informer.go:320] Caches are synced for tokens
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.684061       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.684098       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.698781       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.699001       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.699050       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.699060       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.720187       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.720450       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.725202       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.736652       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.737667       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.738017       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.758863       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.759137       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:58.759589       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:58.759751       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:58.778737       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:58.779301       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:58.794263       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:58.805098       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:58.805155       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:58.805917       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:58.889766       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:58.889864       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:58.889880       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.169736       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.169792       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.169804       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.292507       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.292665       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.292680       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.451231       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.451328       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.451387       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.451649       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.594702       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.594829       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.595498       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.595889       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.744969       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.745617       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.745871       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.892444       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.892907       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.893093       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.136328       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.136634       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.136654       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.136681       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.425858       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.426027       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.426047       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.426160       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.426327       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.426356       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.685414       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.685471       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.685482       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.841490       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.841888       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.841953       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.888027       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.888135       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.888174       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.889767       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.889893       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.889957       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.890020       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.890047       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.890072       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.890079       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.890101       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.890256       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.890391       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.042988       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.043513       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.043602       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.043761       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0127 12:36:54.408329    9948 command_runner.go:130] ! W0127 12:12:01.189051       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.192613       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.192663       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.193062       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.193147       1 shared_informer.go:313] Waiting for caches to sync for node
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.493812       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.493885       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.493919       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494208       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494371       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494391       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494413       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494456       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494473       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494487       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494531       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494547       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494617       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494687       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494717       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494749       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494763       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494781       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494815       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494890       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.495196       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.495268       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.495404       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.495519       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.640900       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.641423       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.641492       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.789671       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.790209       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.790224       1 shared_informer.go:313] Waiting for caches to sync for job
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.939873       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.940295       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.940375       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:02.099155       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:02.099654       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:02.099741       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:02.240427       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:02.240688       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:02.240725       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:02.390343       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.390438       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.390450       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.539643       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.539766       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.539778       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.691835       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.691969       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.739108       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.739143       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.739157       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.739400       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.739775       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.740069       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.890126       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.890235       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.890247       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.040125       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.040770       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.040983       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.063768       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.092877       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.093448       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.110720       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000\" does not exist"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.126986       1 shared_informer.go:320] Caches are synced for endpoint
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.127087       1 shared_informer.go:320] Caches are synced for taint
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.127203       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.127313       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.127524       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.137503       1 shared_informer.go:320] Caches are synced for service account
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.137554       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.138208       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.138217       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.138352       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.141127       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.141405       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.141415       1 shared_informer.go:320] Caches are synced for TTL
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.141424       1 shared_informer.go:320] Caches are synced for ephemeral
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.141607       1 shared_informer.go:320] Caches are synced for daemon sets
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.141617       1 shared_informer.go:320] Caches are synced for stateful set
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.142442       1 shared_informer.go:320] Caches are synced for cronjob
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.146511       1 shared_informer.go:320] Caches are synced for persistent volume
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.150765       1 shared_informer.go:320] Caches are synced for expand
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.152122       1 shared_informer.go:320] Caches are synced for PVC protection
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.160180       1 shared_informer.go:320] Caches are synced for GC
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.164570       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.170520       1 shared_informer.go:320] Caches are synced for namespace
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.185040       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.186131       1 shared_informer.go:320] Caches are synced for HPA
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.188683       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.191196       1 shared_informer.go:320] Caches are synced for crt configmap
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.192089       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.192497       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.192682       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.192862       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.193013       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.193030       1 shared_informer.go:320] Caches are synced for job
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.193151       1 shared_informer.go:320] Caches are synced for deployment
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.193982       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.194157       1 shared_informer.go:320] Caches are synced for node
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.194244       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.194281       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.194310       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.194318       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.194846       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.196614       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.197111       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.197095       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.199168       1 shared_informer.go:320] Caches are synced for disruption
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.200153       1 shared_informer.go:320] Caches are synced for attach detach
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.207229       1 shared_informer.go:320] Caches are synced for PV protection
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.214016       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-659000" podCIDRs=["10.244.0.0/24"]
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.214057       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.214083       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.216325       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.840748       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:04.356274       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="345.711056ms"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:04.454747       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="97.841105ms"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:04.534437       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="79.56576ms"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:04.576528       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="41.959673ms"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:04.576771       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="53.3µs"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:26.045035       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:26.074083       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:26.085407       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="109.3µs"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:26.129584       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="119.3µs"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:27.964629       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="49.302µs"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:28.020606       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="31.923176ms"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:28.020971       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="110.703µs"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:28.132341       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:29.790464       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:07.611410       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m02\" does not exist"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:07.630009       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-659000-m02" podCIDRs=["10.244.1.0/24"]
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:07.631297       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:07.631526       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:07.655401       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:07.883346       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:08.169505       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m02"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:08.255644       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:08.418223       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:17.811768       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:36.752543       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:36.753915       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:36.769807       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:38.199464       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:38.449749       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:16:02.550786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="103.313802ms"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:16:02.585867       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="34.67067ms"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:16:02.586257       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="347.903µs"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:16:02.588870       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="48.6µs"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:16:05.434486       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="13.589639ms"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:16:05.435765       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="54.401µs"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:16:05.890170       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="9.003392ms"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:16:05.890477       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="36.901µs"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:16:09.305780       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:16:33.434322       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:19:26.820887       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:19:54.916460       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:19:54.917420       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m03\" does not exist"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:19:54.965530       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-659000-m03" podCIDRs=["10.244.2.0/24"]
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:19:54.966061       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:19:54.966297       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:19:55.802981       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:19:56.378698       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:19:58.252320       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:19:58.280410       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:20:05.560777       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:20:25.959831       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:20:28.750598       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:20:28.751325       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:20:28.769163       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:20:33.279397       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:23:26.795899       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:24:32.956118       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:25:42.001288       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:28:32.628178       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:28:38.397672       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:28:38.399092       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:28:38.428451       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:28:43.510900       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:29:38.000555       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:30:52.866288       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:30:52.895359       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:30:58.140304       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:04.208510       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:04.209007       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m03\" does not exist"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:04.238560       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-659000-m03" podCIDRs=["10.244.3.0/24"]
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:04.238634       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! E0127 12:31:04.255963       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-659000-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-659000-m03" podCIDRs=["10.244.4.0/24"]
	I0127 12:36:54.411307    9948 command_runner.go:130] ! E0127 12:31:04.256068       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-659000-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! E0127 12:31:04.256109       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'multinode-659000-m03': failed to patch node CIDR: Node \"multinode-659000-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:04.256134       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:04.261242       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:04.513319       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:05.081710       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:08.523576       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:14.394811       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:22.407069       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:22.407472       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:22.419743       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:23.498434       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:33:08.544063       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:33:08.544656       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:33:08.574301       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.412303    9948 command_runner.go:130] ! I0127 12:33:13.661256       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.431320    9948 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:36:54.431320    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 12:36:54.599259    9948 command_runner.go:130] > Name:               multinode-659000
	I0127 12:36:54.599259    9948 command_runner.go:130] > Roles:              control-plane
	I0127 12:36:54.599259    9948 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0127 12:36:54.599259    9948 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0127 12:36:54.599259    9948 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0127 12:36:54.599259    9948 command_runner.go:130] >                     kubernetes.io/hostname=multinode-659000
	I0127 12:36:54.599259    9948 command_runner.go:130] >                     kubernetes.io/os=linux
	I0127 12:36:54.599259    9948 command_runner.go:130] >                     minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	I0127 12:36:54.599259    9948 command_runner.go:130] >                     minikube.k8s.io/name=multinode-659000
	I0127 12:36:54.599259    9948 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0127 12:36:54.599259    9948 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_01_27T12_12_00_0700
	I0127 12:36:54.599259    9948 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0127 12:36:54.599259    9948 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0127 12:36:54.599259    9948 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0127 12:36:54.599259    9948 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0127 12:36:54.599259    9948 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0127 12:36:54.599259    9948 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0127 12:36:54.599259    9948 command_runner.go:130] > CreationTimestamp:  Mon, 27 Jan 2025 12:11:55 +0000
	I0127 12:36:54.599259    9948 command_runner.go:130] > Taints:             <none>
	I0127 12:36:54.599259    9948 command_runner.go:130] > Unschedulable:      false
	I0127 12:36:54.599259    9948 command_runner.go:130] > Lease:
	I0127 12:36:54.599259    9948 command_runner.go:130] >   HolderIdentity:  multinode-659000
	I0127 12:36:54.599259    9948 command_runner.go:130] >   AcquireTime:     <unset>
	I0127 12:36:54.599259    9948 command_runner.go:130] >   RenewTime:       Mon, 27 Jan 2025 12:36:52 +0000
	I0127 12:36:54.599259    9948 command_runner.go:130] > Conditions:
	I0127 12:36:54.599259    9948 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0127 12:36:54.599259    9948 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0127 12:36:54.599259    9948 command_runner.go:130] >   MemoryPressure   False   Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:11:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0127 12:36:54.599259    9948 command_runner.go:130] >   DiskPressure     False   Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:11:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0127 12:36:54.599259    9948 command_runner.go:130] >   PIDPressure      False   Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:11:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Ready            True    Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:36:32 +0000   KubeletReady                 kubelet is posting ready status
	I0127 12:36:54.600252    9948 command_runner.go:130] > Addresses:
	I0127 12:36:54.600252    9948 command_runner.go:130] >   InternalIP:  172.29.198.106
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Hostname:    multinode-659000
	I0127 12:36:54.600252    9948 command_runner.go:130] > Capacity:
	I0127 12:36:54.600252    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:54.600252    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:54.600252    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:54.600252    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:54.600252    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:54.600252    9948 command_runner.go:130] > Allocatable:
	I0127 12:36:54.600252    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:54.600252    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:54.600252    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:54.600252    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:54.600252    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:54.600252    9948 command_runner.go:130] > System Info:
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Machine ID:                 312902fc96b948148d51eecf097c4a9d
	I0127 12:36:54.600252    9948 command_runner.go:130] >   System UUID:                be6234aa-9e29-bb41-8165-59b265a4d7d0
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Boot ID:                    058425a5-0652-4c5c-a517-2369b8cac13d
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Kernel Version:             5.10.207
	I0127 12:36:54.600252    9948 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Operating System:           linux
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Architecture:               amd64
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0127 12:36:54.600252    9948 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0127 12:36:54.600252    9948 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0127 12:36:54.600252    9948 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0127 12:36:54.600252    9948 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0127 12:36:54.600252    9948 command_runner.go:130] >   default                     busybox-58667487b6-2jq9j                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0127 12:36:54.600252    9948 command_runner.go:130] >   kube-system                 coredns-668d6bf9bc-2qw6w                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0127 12:36:54.600252    9948 command_runner.go:130] >   kube-system                 etcd-multinode-659000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	I0127 12:36:54.600252    9948 command_runner.go:130] >   kube-system                 kindnet-z2hqq                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0127 12:36:54.600252    9948 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-659000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	I0127 12:36:54.600252    9948 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-659000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0127 12:36:54.600252    9948 command_runner.go:130] >   kube-system                 kube-proxy-s46mv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0127 12:36:54.600252    9948 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-659000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0127 12:36:54.600252    9948 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0127 12:36:54.600252    9948 command_runner.go:130] > Allocated resources:
	I0127 12:36:54.600252    9948 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Resource           Requests     Limits
	I0127 12:36:54.600252    9948 command_runner.go:130] >   --------           --------     ------
	I0127 12:36:54.600252    9948 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0127 12:36:54.600252    9948 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0127 12:36:54.600252    9948 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0127 12:36:54.600252    9948 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0127 12:36:54.600252    9948 command_runner.go:130] > Events:
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Type     Reason                   Age                From             Message
	I0127 12:36:54.600252    9948 command_runner.go:130] >   ----     ------                   ----               ----             -------
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   Starting                 24m                kube-proxy       
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   Starting                 69s                kube-proxy       
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   Starting                 25m                kubelet          Starting kubelet.
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   NodeHasSufficientMemory  25m (x8 over 25m)  kubelet          Node multinode-659000 status is now: NodeHasSufficientMemory
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    25m (x8 over 25m)  kubelet          Node multinode-659000 status is now: NodeHasNoDiskPressure
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   NodeHasSufficientPID     25m (x7 over 25m)  kubelet          Node multinode-659000 status is now: NodeHasSufficientPID
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    24m                kubelet          Node multinode-659000 status is now: NodeHasNoDiskPressure
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   NodeHasSufficientMemory  24m                kubelet          Node multinode-659000 status is now: NodeHasSufficientMemory
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   NodeHasSufficientPID     24m                kubelet          Node multinode-659000 status is now: NodeHasSufficientPID
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   Starting                 24m                kubelet          Starting kubelet.
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   RegisteredNode           24m                node-controller  Node multinode-659000 event: Registered Node multinode-659000 in Controller
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   NodeReady                24m                kubelet          Node multinode-659000 status is now: NodeReady
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   Starting                 78s                kubelet          Starting kubelet.
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   NodeHasSufficientMemory  78s (x8 over 78s)  kubelet          Node multinode-659000 status is now: NodeHasSufficientMemory
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    78s (x8 over 78s)  kubelet          Node multinode-659000 status is now: NodeHasNoDiskPressure
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   NodeHasSufficientPID     78s (x7 over 78s)  kubelet          Node multinode-659000 status is now: NodeHasSufficientPID
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Warning  Rebooted                 73s                kubelet          Node multinode-659000 has been rebooted, boot id: 058425a5-0652-4c5c-a517-2369b8cac13d
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   RegisteredNode           70s                node-controller  Node multinode-659000 event: Registered Node multinode-659000 in Controller
	I0127 12:36:54.600252    9948 command_runner.go:130] > Name:               multinode-659000-m02
	I0127 12:36:54.600252    9948 command_runner.go:130] > Roles:              <none>
	I0127 12:36:54.600252    9948 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0127 12:36:54.600252    9948 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0127 12:36:54.600252    9948 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     kubernetes.io/hostname=multinode-659000-m02
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     kubernetes.io/os=linux
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     minikube.k8s.io/name=multinode-659000
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_01_27T12_15_08_0700
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0127 12:36:54.601251    9948 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0127 12:36:54.601251    9948 command_runner.go:130] > CreationTimestamp:  Mon, 27 Jan 2025 12:15:07 +0000
	I0127 12:36:54.601251    9948 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0127 12:36:54.601251    9948 command_runner.go:130] > Unschedulable:      false
	I0127 12:36:54.601251    9948 command_runner.go:130] > Lease:
	I0127 12:36:54.601251    9948 command_runner.go:130] >   HolderIdentity:  multinode-659000-m02
	I0127 12:36:54.601251    9948 command_runner.go:130] >   AcquireTime:     <unset>
	I0127 12:36:54.601251    9948 command_runner.go:130] >   RenewTime:       Mon, 27 Jan 2025 12:32:39 +0000
	I0127 12:36:54.601251    9948 command_runner.go:130] > Conditions:
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0127 12:36:54.601251    9948 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0127 12:36:54.601251    9948 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:54.601251    9948 command_runner.go:130] >   DiskPressure     Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:54.601251    9948 command_runner.go:130] >   PIDPressure      Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Ready            Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:54.601251    9948 command_runner.go:130] > Addresses:
	I0127 12:36:54.601251    9948 command_runner.go:130] >   InternalIP:  172.29.199.129
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Hostname:    multinode-659000-m02
	I0127 12:36:54.601251    9948 command_runner.go:130] > Capacity:
	I0127 12:36:54.601251    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:54.601251    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:54.601251    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:54.601251    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:54.601251    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:54.601251    9948 command_runner.go:130] > Allocatable:
	I0127 12:36:54.601251    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:54.601251    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:54.601251    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:54.601251    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:54.601251    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:54.601251    9948 command_runner.go:130] > System Info:
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Machine ID:                 30ce15ff72904b54b07c49f3e2f28802
	I0127 12:36:54.601251    9948 command_runner.go:130] >   System UUID:                b6923799-fa1e-b54c-9340-50dd6a2378f5
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Boot ID:                    3308d183-ec79-4aeb-9d90-80d47cdbff63
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Kernel Version:             5.10.207
	I0127 12:36:54.601251    9948 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Operating System:           linux
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Architecture:               amd64
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0127 12:36:54.601251    9948 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0127 12:36:54.601251    9948 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0127 12:36:54.601251    9948 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0127 12:36:54.601251    9948 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0127 12:36:54.601251    9948 command_runner.go:130] >   default                     busybox-58667487b6-ktfxc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0127 12:36:54.601251    9948 command_runner.go:130] >   kube-system                 kindnet-n7vjl               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0127 12:36:54.601251    9948 command_runner.go:130] >   kube-system                 kube-proxy-pjhc8            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0127 12:36:54.601251    9948 command_runner.go:130] > Allocated resources:
	I0127 12:36:54.601251    9948 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Resource           Requests   Limits
	I0127 12:36:54.601251    9948 command_runner.go:130] >   --------           --------   ------
	I0127 12:36:54.601251    9948 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0127 12:36:54.601251    9948 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0127 12:36:54.601251    9948 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0127 12:36:54.601251    9948 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0127 12:36:54.601251    9948 command_runner.go:130] > Events:
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0127 12:36:54.601251    9948 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-659000-m02 status is now: NodeHasSufficientMemory
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-659000-m02 status is now: NodeHasNoDiskPressure
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-659000-m02 status is now: NodeHasSufficientPID
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-659000-m02 event: Registered Node multinode-659000-m02 in Controller
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-659000-m02 status is now: NodeReady
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Normal  RegisteredNode           70s                node-controller  Node multinode-659000-m02 event: Registered Node multinode-659000-m02 in Controller
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Normal  NodeNotReady             20s                node-controller  Node multinode-659000-m02 status is now: NodeNotReady
	I0127 12:36:54.601251    9948 command_runner.go:130] > Name:               multinode-659000-m03
	I0127 12:36:54.601251    9948 command_runner.go:130] > Roles:              <none>
	I0127 12:36:54.601251    9948 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     kubernetes.io/hostname=multinode-659000-m03
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     kubernetes.io/os=linux
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     minikube.k8s.io/name=multinode-659000
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_01_27T12_31_04_0700
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0127 12:36:54.601251    9948 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0127 12:36:54.602267    9948 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0127 12:36:54.602267    9948 command_runner.go:130] > CreationTimestamp:  Mon, 27 Jan 2025 12:31:04 +0000
	I0127 12:36:54.602267    9948 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0127 12:36:54.602267    9948 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0127 12:36:54.602267    9948 command_runner.go:130] > Unschedulable:      false
	I0127 12:36:54.602267    9948 command_runner.go:130] > Lease:
	I0127 12:36:54.602267    9948 command_runner.go:130] >   HolderIdentity:  multinode-659000-m03
	I0127 12:36:54.602267    9948 command_runner.go:130] >   AcquireTime:     <unset>
	I0127 12:36:54.602267    9948 command_runner.go:130] >   RenewTime:       Mon, 27 Jan 2025 12:32:15 +0000
	I0127 12:36:54.602267    9948 command_runner.go:130] > Conditions:
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0127 12:36:54.602267    9948 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0127 12:36:54.602267    9948 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:54.602267    9948 command_runner.go:130] >   DiskPressure     Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:54.602267    9948 command_runner.go:130] >   PIDPressure      Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Ready            Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:54.602267    9948 command_runner.go:130] > Addresses:
	I0127 12:36:54.602267    9948 command_runner.go:130] >   InternalIP:  172.29.206.88
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Hostname:    multinode-659000-m03
	I0127 12:36:54.602267    9948 command_runner.go:130] > Capacity:
	I0127 12:36:54.602267    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:54.602267    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:54.602267    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:54.602267    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:54.602267    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:54.602267    9948 command_runner.go:130] > Allocatable:
	I0127 12:36:54.602267    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:54.602267    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:54.602267    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:54.602267    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:54.602267    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:54.602267    9948 command_runner.go:130] > System Info:
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Machine ID:                 5cd7b7bdbad940e0831e949f70fdd5af
	I0127 12:36:54.602267    9948 command_runner.go:130] >   System UUID:                bab0a90b-9ed8-ba42-88b9-fc6568ad7a53
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Boot ID:                    9d0d04c8-71ef-487a-a13c-e1de6463b3fe
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Kernel Version:             5.10.207
	I0127 12:36:54.602267    9948 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Operating System:           linux
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Architecture:               amd64
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0127 12:36:54.602267    9948 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0127 12:36:54.602267    9948 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0127 12:36:54.602267    9948 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0127 12:36:54.602267    9948 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0127 12:36:54.602267    9948 command_runner.go:130] >   kube-system                 kindnet-kpfjt       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	I0127 12:36:54.602267    9948 command_runner.go:130] >   kube-system                 kube-proxy-sk5js    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	I0127 12:36:54.602267    9948 command_runner.go:130] > Allocated resources:
	I0127 12:36:54.602267    9948 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Resource           Requests   Limits
	I0127 12:36:54.602267    9948 command_runner.go:130] >   --------           --------   ------
	I0127 12:36:54.602267    9948 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0127 12:36:54.602267    9948 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0127 12:36:54.602267    9948 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0127 12:36:54.602267    9948 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0127 12:36:54.602267    9948 command_runner.go:130] > Events:
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0127 12:36:54.602267    9948 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Normal  Starting                 5m46s                  kube-proxy       
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Normal  NodeHasSufficientMemory  17m (x2 over 17m)      kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientMemory
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Normal  NodeHasSufficientPID     17m (x2 over 17m)      kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientPID
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Normal  NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    17m (x2 over 17m)      kubelet          Node multinode-659000-m03 status is now: NodeHasNoDiskPressure
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-659000-m03 status is now: NodeReady
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Normal  Starting                 5m51s                  kubelet          Starting kubelet.
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Normal  CIDRAssignmentFailed     5m50s                  cidrAllocator    Node multinode-659000-m03 status is now: CIDRAssignmentFailed
	I0127 12:36:54.603266    9948 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m50s (x2 over 5m50s)  kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientMemory
	I0127 12:36:54.603266    9948 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m50s (x2 over 5m50s)  kubelet          Node multinode-659000-m03 status is now: NodeHasNoDiskPressure
	I0127 12:36:54.603266    9948 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m50s (x2 over 5m50s)  kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientPID
	I0127 12:36:54.603266    9948 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m50s                  kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:54.603266    9948 command_runner.go:130] >   Normal  RegisteredNode           5m46s                  node-controller  Node multinode-659000-m03 event: Registered Node multinode-659000-m03 in Controller
	I0127 12:36:54.603266    9948 command_runner.go:130] >   Normal  NodeReady                5m32s                  kubelet          Node multinode-659000-m03 status is now: NodeReady
	I0127 12:36:54.603266    9948 command_runner.go:130] >   Normal  NodeNotReady             3m46s                  node-controller  Node multinode-659000-m03 status is now: NodeNotReady
	I0127 12:36:54.603266    9948 command_runner.go:130] >   Normal  RegisteredNode           70s                    node-controller  Node multinode-659000-m03 event: Registered Node multinode-659000-m03 in Controller
	I0127 12:36:54.612412    9948 logs.go:123] Gathering logs for kube-proxy [bbec7ccef7da] ...
	I0127 12:36:54.612412    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbec7ccef7da"
	I0127 12:36:54.652262    9948 command_runner.go:130] ! I0127 12:12:05.290111       1 server_linux.go:66] "Using iptables proxy"
	I0127 12:36:54.653105    9948 command_runner.go:130] ! E0127 12:12:05.321300       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0127 12:36:54.653105    9948 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0127 12:36:54.653179    9948 command_runner.go:130] ! 	add table ip kube-proxy
	I0127 12:36:54.653179    9948 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:54.653179    9948 command_runner.go:130] !  >
	I0127 12:36:54.653179    9948 command_runner.go:130] ! E0127 12:12:05.352123       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0127 12:36:54.653179    9948 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0127 12:36:54.653179    9948 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0127 12:36:54.653179    9948 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:54.653260    9948 command_runner.go:130] !  >
	I0127 12:36:54.653260    9948 command_runner.go:130] ! I0127 12:12:05.378799       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.204.17"]
	I0127 12:36:54.653310    9948 command_runner.go:130] ! E0127 12:12:05.378872       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 12:36:54.653310    9948 command_runner.go:130] ! I0127 12:12:05.470419       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 12:36:54.653310    9948 command_runner.go:130] ! I0127 12:12:05.470552       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 12:36:54.653373    9948 command_runner.go:130] ! I0127 12:12:05.470596       1 server_linux.go:170] "Using iptables Proxier"
	I0127 12:36:54.653398    9948 command_runner.go:130] ! I0127 12:12:05.475557       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 12:36:54.653428    9948 command_runner.go:130] ! I0127 12:12:05.476697       1 server.go:497] "Version info" version="v1.32.1"
	I0127 12:36:54.653428    9948 command_runner.go:130] ! I0127 12:12:05.476717       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:54.653428    9948 command_runner.go:130] ! I0127 12:12:05.478788       1 config.go:199] "Starting service config controller"
	I0127 12:36:54.653428    9948 command_runner.go:130] ! I0127 12:12:05.478844       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 12:36:54.653428    9948 command_runner.go:130] ! I0127 12:12:05.478916       1 config.go:105] "Starting endpoint slice config controller"
	I0127 12:36:54.653428    9948 command_runner.go:130] ! I0127 12:12:05.479018       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 12:36:54.653428    9948 command_runner.go:130] ! I0127 12:12:05.480053       1 config.go:329] "Starting node config controller"
	I0127 12:36:54.653428    9948 command_runner.go:130] ! I0127 12:12:05.480113       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 12:36:54.653428    9948 command_runner.go:130] ! I0127 12:12:05.579605       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 12:36:54.653428    9948 command_runner.go:130] ! I0127 12:12:05.579669       1 shared_informer.go:320] Caches are synced for service config
	I0127 12:36:54.653428    9948 command_runner.go:130] ! I0127 12:12:05.580463       1 shared_informer.go:320] Caches are synced for node config
	I0127 12:36:54.656163    9948 logs.go:123] Gathering logs for kube-controller-manager [8d4872cda28d] ...
	I0127 12:36:54.656219    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4872cda28d"
	I0127 12:36:54.685743    9948 command_runner.go:130] ! I0127 12:35:39.384985       1 serving.go:386] Generated self-signed cert in-memory
	I0127 12:36:54.685795    9948 command_runner.go:130] ! I0127 12:35:39.805936       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0127 12:36:54.685795    9948 command_runner.go:130] ! I0127 12:35:39.811206       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:54.685795    9948 command_runner.go:130] ! I0127 12:35:39.817632       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0127 12:36:54.685795    9948 command_runner.go:130] ! I0127 12:35:39.822579       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:54.685893    9948 command_runner.go:130] ! I0127 12:35:39.822772       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 12:36:54.685893    9948 command_runner.go:130] ! I0127 12:35:39.823033       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:54.685893    9948 command_runner.go:130] ! I0127 12:35:43.406116       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0127 12:36:54.685893    9948 command_runner.go:130] ! I0127 12:35:43.407249       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0127 12:36:54.685972    9948 command_runner.go:130] ! I0127 12:35:43.417237       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0127 12:36:54.685972    9948 command_runner.go:130] ! I0127 12:35:43.417292       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0127 12:36:54.685972    9948 command_runner.go:130] ! I0127 12:35:43.417300       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0127 12:36:54.685972    9948 command_runner.go:130] ! I0127 12:35:43.417307       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0127 12:36:54.685972    9948 command_runner.go:130] ! I0127 12:35:43.417506       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0127 12:36:54.685972    9948 command_runner.go:130] ! I0127 12:35:43.417534       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0127 12:36:54.685972    9948 command_runner.go:130] ! I0127 12:35:43.417553       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0127 12:36:54.686068    9948 command_runner.go:130] ! I0127 12:35:43.431621       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0127 12:36:54.686096    9948 command_runner.go:130] ! I0127 12:35:43.431964       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0127 12:36:54.686096    9948 command_runner.go:130] ! I0127 12:35:43.431989       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0127 12:36:54.686096    9948 command_runner.go:130] ! I0127 12:35:43.432010       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0127 12:36:54.686096    9948 command_runner.go:130] ! I0127 12:35:43.442961       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0127 12:36:54.686096    9948 command_runner.go:130] ! I0127 12:35:43.447308       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0127 12:36:54.686174    9948 command_runner.go:130] ! I0127 12:35:43.447396       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0127 12:36:54.686174    9948 command_runner.go:130] ! I0127 12:35:43.449412       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0127 12:36:54.686174    9948 command_runner.go:130] ! I0127 12:35:43.449608       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0127 12:36:54.686234    9948 command_runner.go:130] ! I0127 12:35:43.466583       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0127 12:36:54.686258    9948 command_runner.go:130] ! I0127 12:35:43.467490       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0127 12:36:54.686258    9948 command_runner.go:130] ! I0127 12:35:43.467508       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0127 12:36:54.686258    9948 command_runner.go:130] ! I0127 12:35:43.491988       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0127 12:36:54.686307    9948 command_runner.go:130] ! I0127 12:35:43.493672       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0127 12:36:54.686329    9948 command_runner.go:130] ! I0127 12:35:43.493698       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.498557       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.503953       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.503976       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.505729       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.505861       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.505872       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.509718       1 shared_informer.go:320] Caches are synced for tokens
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.510192       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.510208       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.510698       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.510714       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.512896       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.513433       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.513448       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.516433       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.516659       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.516671       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.524334       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.524358       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.524545       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.524557       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.534871       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.535028       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.535038       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.557745       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.557975       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.612615       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.612890       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.612906       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.616333       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.627087       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.627107       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.692864       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0127 12:36:54.686907    9948 command_runner.go:130] ! I0127 12:35:43.692892       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0127 12:36:54.686907    9948 command_runner.go:130] ! I0127 12:35:43.693095       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0127 12:36:54.686969    9948 command_runner.go:130] ! I0127 12:35:43.700796       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0127 12:36:54.686969    9948 command_runner.go:130] ! I0127 12:35:43.703832       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0127 12:36:54.687017    9948 command_runner.go:130] ! I0127 12:35:43.703867       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0127 12:36:54.687043    9948 command_runner.go:130] ! I0127 12:35:43.713912       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0127 12:36:54.687043    9948 command_runner.go:130] ! I0127 12:35:43.714114       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0127 12:36:54.687043    9948 command_runner.go:130] ! I0127 12:35:43.714094       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0127 12:36:54.687043    9948 command_runner.go:130] ! I0127 12:35:43.714712       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0127 12:36:54.687043    9948 command_runner.go:130] ! I0127 12:35:43.714721       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0127 12:36:54.687107    9948 command_runner.go:130] ! I0127 12:35:43.721904       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0127 12:36:54.687131    9948 command_runner.go:130] ! I0127 12:35:43.722372       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0127 12:36:54.687177    9948 command_runner.go:130] ! I0127 12:35:43.723076       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0127 12:36:54.687177    9948 command_runner.go:130] ! I0127 12:35:43.739709       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.739886       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.739897       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.748074       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.748419       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.748432       1 shared_informer.go:313] Waiting for caches to sync for job
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.774085       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.774108       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.774196       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.814844       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.815383       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.815410       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! W0127 12:35:43.815432       1 shared_informer.go:597] resyncPeriod 17h46m45.188948257s is smaller than resyncCheckPeriod 20h1m58.14772951s and the informer has already started. Changing it to 20h1m58.14772951s
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.815487       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.815503       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.816077       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.816613       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.817053       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.817252       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.817373       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.817397       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! W0127 12:35:43.818105       1 shared_informer.go:597] resyncPeriod 12h27m56.377400464s is smaller than resyncCheckPeriod 20h1m58.14772951s and the informer has already started. Changing it to 20h1m58.14772951s
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.818223       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.818270       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.818295       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.818319       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.818336       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.818363       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.818376       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.818392       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.818410       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.818442       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0127 12:36:54.687781    9948 command_runner.go:130] ! I0127 12:35:43.818764       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0127 12:36:54.687781    9948 command_runner.go:130] ! I0127 12:35:43.818778       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0127 12:36:54.687781    9948 command_runner.go:130] ! I0127 12:35:43.819843       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0127 12:36:54.687781    9948 command_runner.go:130] ! I0127 12:35:43.841955       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0127 12:36:54.687861    9948 command_runner.go:130] ! I0127 12:35:43.842559       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0127 12:36:54.687861    9948 command_runner.go:130] ! I0127 12:35:43.842587       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0127 12:36:54.687861    9948 command_runner.go:130] ! I0127 12:35:43.842995       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0127 12:36:54.687916    9948 command_runner.go:130] ! I0127 12:35:43.852026       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0127 12:36:54.687943    9948 command_runner.go:130] ! I0127 12:35:43.852211       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0127 12:36:54.687943    9948 command_runner.go:130] ! I0127 12:35:43.852253       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0127 12:36:54.687943    9948 command_runner.go:130] ! I0127 12:35:43.922876       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0127 12:36:54.687943    9948 command_runner.go:130] ! I0127 12:35:43.923019       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0127 12:36:54.687943    9948 command_runner.go:130] ! I0127 12:35:43.923033       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0127 12:36:54.688025    9948 command_runner.go:130] ! I0127 12:35:43.962858       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0127 12:36:54.688025    9948 command_runner.go:130] ! I0127 12:35:43.962895       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0127 12:36:54.688106    9948 command_runner.go:130] ! I0127 12:35:43.963021       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0127 12:36:54.688180    9948 command_runner.go:130] ! I0127 12:35:43.963037       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0127 12:36:54.688202    9948 command_runner.go:130] ! I0127 12:35:44.014798       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0127 12:36:54.688202    9948 command_runner.go:130] ! I0127 12:35:44.016438       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0127 12:36:54.688202    9948 command_runner.go:130] ! I0127 12:35:44.016458       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0127 12:36:54.688202    9948 command_runner.go:130] ! I0127 12:35:44.066881       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0127 12:36:54.688255    9948 command_runner.go:130] ! I0127 12:35:44.067018       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0127 12:36:54.688255    9948 command_runner.go:130] ! I0127 12:35:44.067064       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0127 12:36:54.688303    9948 command_runner.go:130] ! W0127 12:35:44.227808       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.236233       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.236429       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.236541       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.236556       1 shared_informer.go:313] Waiting for caches to sync for node
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.261051       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.261341       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.261374       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.314220       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.314319       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.314352       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.364392       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.364625       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.365833       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.365937       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.365975       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.365977       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.367697       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.368067       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.368427       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.369763       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.370290       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.370408       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.370568       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.412258       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.412274       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.412282       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.412297       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.412368       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.412379       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.517568       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0127 12:36:54.688822    9948 command_runner.go:130] ! I0127 12:35:44.517771       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0127 12:36:54.688822    9948 command_runner.go:130] ! I0127 12:35:44.518074       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0127 12:36:54.688865    9948 command_runner.go:130] ! I0127 12:35:44.518288       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0127 12:36:54.688865    9948 command_runner.go:130] ! I0127 12:35:44.564449       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0127 12:36:54.688865    9948 command_runner.go:130] ! I0127 12:35:44.564546       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0127 12:36:54.688865    9948 command_runner.go:130] ! I0127 12:35:44.564657       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0127 12:36:54.688865    9948 command_runner.go:130] ! I0127 12:35:44.591265       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0127 12:36:54.688865    9948 command_runner.go:130] ! I0127 12:35:44.663628       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0127 12:36:54.688963    9948 command_runner.go:130] ! I0127 12:35:44.727283       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0127 12:36:54.688963    9948 command_runner.go:130] ! I0127 12:35:44.739370       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000\" does not exist"
	I0127 12:36:54.689018    9948 command_runner.go:130] ! I0127 12:35:44.739797       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m02\" does not exist"
	I0127 12:36:54.689042    9948 command_runner.go:130] ! I0127 12:35:44.740184       1 shared_informer.go:320] Caches are synced for crt configmap
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.740835       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m03\" does not exist"
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.747985       1 shared_informer.go:320] Caches are synced for GC
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.748593       1 shared_informer.go:320] Caches are synced for job
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.765439       1 shared_informer.go:320] Caches are synced for cronjob
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.765669       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.765982       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.766264       1 shared_informer.go:320] Caches are synced for expand
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.766617       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.767305       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.767462       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.768217       1 shared_informer.go:320] Caches are synced for stateful set
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.766681       1 shared_informer.go:320] Caches are synced for daemon sets
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.774887       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.775167       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.775269       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.775418       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.778028       1 shared_informer.go:320] Caches are synced for HPA
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.793610       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.793916       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.798773       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.805302       1 shared_informer.go:320] Caches are synced for PVC protection
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.805404       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.806234       1 shared_informer.go:320] Caches are synced for PV protection
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.811621       1 shared_informer.go:320] Caches are synced for ephemeral
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.813099       1 shared_informer.go:320] Caches are synced for TTL
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.813420       1 shared_informer.go:320] Caches are synced for namespace
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.813655       1 shared_informer.go:320] Caches are synced for deployment
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.815238       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.819201       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.819433       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.820006       1 shared_informer.go:320] Caches are synced for disruption
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.821695       1 shared_informer.go:320] Caches are synced for taint
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.821905       1 shared_informer.go:320] Caches are synced for endpoint
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.824479       1 shared_informer.go:320] Caches are synced for persistent volume
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.824852       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.825228       1 shared_informer.go:320] Caches are synced for attach detach
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.825784       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.836209       1 shared_informer.go:320] Caches are synced for service account
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.836651       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.836969       1 shared_informer.go:320] Caches are synced for node
	I0127 12:36:54.689598    9948 command_runner.go:130] ! I0127 12:35:44.838015       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0127 12:36:54.689598    9948 command_runner.go:130] ! I0127 12:35:44.838049       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0127 12:36:54.689656    9948 command_runner.go:130] ! I0127 12:35:44.838058       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0127 12:36:54.689656    9948 command_runner.go:130] ! I0127 12:35:44.838065       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0127 12:36:54.689656    9948 command_runner.go:130] ! I0127 12:35:44.838200       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.689694    9948 command_runner.go:130] ! I0127 12:35:44.838217       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.689694    9948 command_runner.go:130] ! I0127 12:35:44.838227       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.689694    9948 command_runner.go:130] ! I0127 12:35:44.844908       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:36:54.689784    9948 command_runner.go:130] ! I0127 12:35:44.845551       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0127 12:36:54.689784    9948 command_runner.go:130] ! I0127 12:35:44.845777       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0127 12:36:54.689784    9948 command_runner.go:130] ! I0127 12:35:44.898551       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.689784    9948 command_runner.go:130] ! I0127 12:35:44.899476       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.689846    9948 command_runner.go:130] ! I0127 12:35:44.900201       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000"
	I0127 12:36:54.689867    9948 command_runner.go:130] ! I0127 12:35:44.900496       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m02"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:35:44.900687       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m03"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:35:44.901405       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:35:44.984858       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:35:45.000632       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="180.930208ms"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:35:45.003909       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="39.2µs"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:35:45.016382       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="195.414857ms"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:35:45.016698       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="108.2µs"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:35:54.975850       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:32.834093       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:32.834425       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:32.855708       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:34.928482       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:34.940809       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:34.955742       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:35.025877       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="15.32946ms"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:35.026020       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="30.3µs"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:40.041357       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:47.580904       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="50.8µs"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:48.616631       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="19.328909ms"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:48.617909       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="35.8µs"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:48.650691       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="23.414753ms"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:48.651163       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="28.701µs"
	I0127 12:36:54.708049    9948 logs.go:123] Gathering logs for Docker ...
	I0127 12:36:54.708049    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:11 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:11 minikube cri-dockerd[223]: time="2025-01-27T12:34:11Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:11 minikube cri-dockerd[223]: time="2025-01-27T12:34:11Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:11 minikube cri-dockerd[223]: time="2025-01-27T12:34:11Z" level=info msg="Start docker client with request timeout 0s"
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:11 minikube cri-dockerd[223]: time="2025-01-27T12:34:11Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:11 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:11 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:11 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:14 minikube cri-dockerd[404]: time="2025-01-27T12:34:14Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:14 minikube cri-dockerd[404]: time="2025-01-27T12:34:14Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:14 minikube cri-dockerd[404]: time="2025-01-27T12:34:14Z" level=info msg="Start docker client with request timeout 0s"
	I0127 12:36:54.739905    9948 command_runner.go:130] > Jan 27 12:34:14 minikube cri-dockerd[404]: time="2025-01-27T12:34:14Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0127 12:36:54.739964    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:54.739964    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0127 12:36:54.739964    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0127 12:36:54.740066    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0127 12:36:54.740066    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0127 12:36:54.740066    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0127 12:36:54.740066    9948 command_runner.go:130] > Jan 27 12:34:16 minikube cri-dockerd[425]: time="2025-01-27T12:34:16Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:34:16 minikube cri-dockerd[425]: time="2025-01-27T12:34:16Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:34:16 minikube cri-dockerd[425]: time="2025-01-27T12:34:16Z" level=info msg="Start docker client with request timeout 0s"
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:34:16 minikube cri-dockerd[425]: time="2025-01-27T12:34:16Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 systemd[1]: Starting Docker Application Container Engine...
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[653]: time="2025-01-27T12:35:01.316616305Z" level=info msg="Starting up"
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[653]: time="2025-01-27T12:35:01.317424338Z" level=info msg="containerd not running, starting managed containerd"
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[653]: time="2025-01-27T12:35:01.318870498Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=659
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.350184287Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374094572Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374181575Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374315681Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374337282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374861203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374889804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375040811Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375239819Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.740655    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375267320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:54.740708    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375281220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.740708    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375833643Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.740708    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.376559373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.740797    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.379449292Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:54.740824    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.379538296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.379661901Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.379800807Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.380313228Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.380441533Z" level=info msg="metadata content store policy set" policy=shared
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.385960360Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386099266Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386121867Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386137768Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386151968Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386229971Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386475981Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386600687Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386685890Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386757893Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386815695Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386833196Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386854497Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386882698Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386897399Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386908999Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386920500Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386931000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386948401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386962701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387079606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387099107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387131708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.741407    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387149509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.741407    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387164010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.741466    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387179110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.741466    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387194311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.741466    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387212812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.741466    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387227412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.741556    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387242613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.741556    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387257314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.741556    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387275514Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0127 12:36:54.741556    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387300315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.741637    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387352418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.741637    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387385019Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0127 12:36:54.741637    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387423920Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0127 12:36:54.741694    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387443921Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387454422Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387465222Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387473923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387486423Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387496523Z" level=info msg="NRI interface is disabled by configuration."
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.388077647Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.388176351Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.388221553Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.388239554Z" level=info msg="containerd successfully booted in 0.040630s"
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:02 multinode-659000 dockerd[653]: time="2025-01-27T12:35:02.375461301Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:02 multinode-659000 dockerd[653]: time="2025-01-27T12:35:02.619440119Z" level=info msg="Loading containers: start."
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:02 multinode-659000 dockerd[653]: time="2025-01-27T12:35:02.931712674Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.079754338Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.199112944Z" level=info msg="Loading containers: done."
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.227370410Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.227394111Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.227415612Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.227924231Z" level=info msg="Daemon has completed initialization"
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.267619030Z" level=info msg="API listen on /var/run/docker.sock"
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.267851638Z" level=info msg="API listen on [::]:2376"
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 systemd[1]: Started Docker Application Container Engine.
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.208684124Z" level=info msg="Processing signal 'terminated'"
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.210887831Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.211188432Z" level=info msg="Daemon shutdown complete"
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.211249132Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.211349733Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 systemd[1]: Stopping Docker Application Container Engine...
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 systemd[1]: docker.service: Deactivated successfully.
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 systemd[1]: Stopped Docker Application Container Engine.
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 systemd[1]: Starting Docker Application Container Engine...
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:29.270852796Z" level=info msg="Starting up"
	I0127 12:36:54.742265    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:29.271817099Z" level=info msg="containerd not running, starting managed containerd"
	I0127 12:36:54.742265    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:29.272921603Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1109
	I0127 12:36:54.742265    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.304741210Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0127 12:36:54.742265    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329258592Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0127 12:36:54.742336    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329353092Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0127 12:36:54.742336    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329390892Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0127 12:36:54.742336    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329406192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.742336    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329428593Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:54.742435    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329441293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.742454    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329563193Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329667793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329687993Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329698693Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329723194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329854194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.332844104Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.332945004Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333117005Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333187905Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333222205Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333244905Z" level=info msg="metadata content store policy set" policy=shared
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333669407Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333741907Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333760007Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333804107Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333825507Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333876808Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334348509Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334487410Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334670410Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334694510Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334722510Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334740210Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334754110Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334768211Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334783611Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.743033    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334797111Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.743117    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334827611Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334839711Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334900511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334918411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334939711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334956111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334972911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335000311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335303412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335328412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335345712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335365113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335379713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335394013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335408713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335432513Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335458213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335473813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335509613Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335706914Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335751914Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335766514Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335779214Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335790814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335808914Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335823714Z" level=info msg="NRI interface is disabled by configuration."
	I0127 12:36:54.743675    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.336050915Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0127 12:36:54.743726    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.336227915Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0127 12:36:54.743726    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.336312916Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0127 12:36:54.743726    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.336356016Z" level=info msg="containerd successfully booted in 0.033394s"
	I0127 12:36:54.743726    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.313483202Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0127 12:36:54.743726    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.352802934Z" level=info msg="Loading containers: start."
	I0127 12:36:54.743818    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.586901421Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0127 12:36:54.743876    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.690006868Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0127 12:36:54.743897    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.804531453Z" level=info msg="Loading containers: done."
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.832567747Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.832684748Z" level=info msg="Daemon has completed initialization"
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.868895669Z" level=info msg="API listen on /var/run/docker.sock"
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 systemd[1]: Started Docker Application Container Engine.
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.869822273Z" level=info msg="API listen on [::]:2376"
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Start docker client with request timeout 0s"
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Loaded network plugin cni"
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Start cri-dockerd grpc backend"
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:36Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-58667487b6-2jq9j_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"4c82c0ec4aeaa9b21462a8248326ae982d6f7a0aee31347f1a58d216f0335177\""
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:36Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-668d6bf9bc-2qw6w_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"4a53e133a1cd6ab9514cb15ac3c4f1d5683d17008b482cebb08bf4809e060709\""
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.148610487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.149713190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.744452    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.149731191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744452    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.149823291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744503    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.227312151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.744543    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.227946754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.744583    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.228465355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744657    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.229058857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744657    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b770a357d98307d140bf1525f91cca5fa9278f7f9428b9b956db31e6a36de7f2/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.326758786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.326897686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.327082287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.327397788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.340486032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.340542232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.340557232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.340640833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/910315897d84204b3db03c56eaeac0c855a23f6250a406220a840c10e2dad7a7/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5601285bb260a8ced44a77e9dbb10f08580841c917885470ec5941525f08ee76/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cdf534e99b2bbcc52d3bf2ce73ef5d4299b5264cf0a050fa21ff7f6fe2bb3b2a/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.671974447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.672075247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.672094947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.673787353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.761333147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.761791949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.761989149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.763491554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.875104030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.875307231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.879314144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.879751245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.905404632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.745241    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.905473732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.905487532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.905580032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:41Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:42.944884578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:42.944962279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:42.944975379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:42.945417180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.028307259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.028541060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.028779960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.029212562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.033020375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.033338176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.033463276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.033775977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/34d579bb511fec290478f20b13002063b43c1a71bd6f2f45f1d83bbd8ac971ab/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b613e9a7a356580fd5381e358408317fd6120a119c23f3f196adda302e5ca97f/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d43e4cc62e0877d4b65191623d58195cd33c60eff33c6e49e605f69620d5115f/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.564400062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.564959364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.565260665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.565864167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.593549260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.745850    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.594548363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.745850    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.594809964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.745850    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.595677067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.745850    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.831064858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.745850    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.831237859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.745988    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.831252459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.746043    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.831462360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.746076    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:14.113708902Z" level=info msg="shim disconnected" id=9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f namespace=moby
	I0127 12:36:54.746076    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:14.113811702Z" level=warning msg="cleaning up after shim disconnected" id=9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f namespace=moby
	I0127 12:36:54.746076    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:14.113825002Z" level=info msg="cleaning up dead shim" namespace=moby
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 dockerd[1103]: time="2025-01-27T12:36:14.115914814Z" level=info msg="ignoring event" container=9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.602318882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.604079090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.604098490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.604656892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.795612113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.795786714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.796654617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.796995818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861006350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861082751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861094651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861334452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:36:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6b22dbb5ef3e0d283203499fffad001c9c20c643564a55e5bfa5d6352f80e178/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:36:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef504f99724cba01531b3894329439ae069a4ccac272e31bfac333cc24e62c53/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.321502068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.321825070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.321903471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.322491776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.384958874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.385201176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.385326577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.385735080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.772096    9948 logs.go:123] Gathering logs for coredns [b3a9ed6e130c] ...
	I0127 12:36:54.772096    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a9ed6e130c"
	I0127 12:36:54.800524    9948 command_runner.go:130] > .:53
	I0127 12:36:54.800524    9948 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 5e2e325279dfa828a8fd1b44d83ab4703abb0247d4beadde42157147650fe687c0862eaa4caa15a5d9139c48c9a9dd5ec3cd962ba60368e8ffb4d02ae4d29aeb
	I0127 12:36:54.800524    9948 command_runner.go:130] > CoreDNS-1.11.3
	I0127 12:36:54.800524    9948 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0127 12:36:54.800524    9948 command_runner.go:130] > [INFO] 127.0.0.1:47464 - 34099 "HINFO IN 5313391549706874198.1206200090770907475. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062040871s
	I0127 12:36:54.800524    9948 logs.go:123] Gathering logs for coredns [f818dd15d8b0] ...
	I0127 12:36:54.800524    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f818dd15d8b0"
	I0127 12:36:54.829398    9948 command_runner.go:130] > .:53
	I0127 12:36:54.829398    9948 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 5e2e325279dfa828a8fd1b44d83ab4703abb0247d4beadde42157147650fe687c0862eaa4caa15a5d9139c48c9a9dd5ec3cd962ba60368e8ffb4d02ae4d29aeb
	I0127 12:36:54.829398    9948 command_runner.go:130] > CoreDNS-1.11.3
	I0127 12:36:54.829398    9948 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 127.0.0.1:50782 - 35950 "HINFO IN 8787717511470146079.8254135695837817311. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.151481959s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:56186 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000430505s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:58756 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.126738988s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:36399 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.053330342s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:35359 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.140941591s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:41150 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000220803s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:57591 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0000709s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:45132 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000133002s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:48593 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000728s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:53274 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000261802s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:57676 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.069110701s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:59948 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000178302s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:39801 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000198802s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:45673 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023238636s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:42840 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154002s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:43505 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000181002s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:34935 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092101s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:54822 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155102s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:50877 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000188102s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:45384 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183802s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:35073 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000227202s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:50517 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000061101s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:37353 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130501s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:42117 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114301s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:46171 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060401s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:55282 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117601s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:41761 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162301s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:35358 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000218902s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:50342 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124402s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:38159 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159602s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:37043 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000171002s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:50762 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000168301s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:33014 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000603s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:34941 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134301s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:60117 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000393904s
	I0127 12:36:54.830058    9948 command_runner.go:130] > [INFO] 10.244.0.3:47506 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000214402s
	I0127 12:36:54.830058    9948 command_runner.go:130] > [INFO] 10.244.0.3:42968 - 5 "PTR IN 1.192.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000443604s
	I0127 12:36:54.830114    9948 command_runner.go:130] > [INFO] 10.244.1.2:52260 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193802s
	I0127 12:36:54.830114    9948 command_runner.go:130] > [INFO] 10.244.1.2:40492 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000310903s
	I0127 12:36:54.830114    9948 command_runner.go:130] > [INFO] 10.244.1.2:50341 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074s
	I0127 12:36:54.830114    9948 command_runner.go:130] > [INFO] 10.244.1.2:41676 - 5 "PTR IN 1.192.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000637s
	I0127 12:36:54.830114    9948 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0127 12:36:54.830114    9948 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0127 12:36:54.832376    9948 logs.go:123] Gathering logs for kindnet [373bec67270f] ...
	I0127 12:36:54.832376    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 373bec67270f"
	I0127 12:36:54.859435    9948 command_runner.go:130] ! I0127 12:35:44.464092       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0127 12:36:54.859435    9948 command_runner.go:130] ! I0127 12:35:44.489651       1 main.go:139] hostIP = 172.29.198.106
	I0127 12:36:54.859541    9948 command_runner.go:130] ! podIP = 172.29.198.106
	I0127 12:36:54.859541    9948 command_runner.go:130] ! I0127 12:35:44.489794       1 main.go:148] setting mtu 1500 for CNI 
	I0127 12:36:54.859541    9948 command_runner.go:130] ! I0127 12:35:44.489865       1 main.go:178] kindnetd IP family: "ipv4"
	I0127 12:36:54.859541    9948 command_runner.go:130] ! I0127 12:35:44.490024       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0127 12:36:54.859541    9948 command_runner.go:130] ! I0127 12:35:45.397363       1 main.go:239] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-40: Error: Could not process rule: Operation not supported
	I0127 12:36:54.859623    9948 command_runner.go:130] ! add table inet kindnet-network-policies
	I0127 12:36:54.859623    9948 command_runner.go:130] ! ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:54.859623    9948 command_runner.go:130] ! , skipping network policies
	I0127 12:36:54.859661    9948 command_runner.go:130] ! W0127 12:36:15.407551       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0127 12:36:54.859661    9948 command_runner.go:130] ! E0127 12:36:15.407870       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError"
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:25.405793       1 main.go:297] Handling node with IPs: map[172.29.198.106:{}]
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:25.405967       1 main.go:301] handling current node
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:25.406822       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:25.406903       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:25.408014       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.29.199.129 Flags: [] Table: 0 Realm: 0} 
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:25.408956       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:25.409055       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:25.409321       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.29.206.88 Flags: [] Table: 0 Realm: 0} 
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:35.400986       1 main.go:297] Handling node with IPs: map[172.29.198.106:{}]
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:35.401115       1 main.go:301] handling current node
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:35.401203       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:35.401377       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:35.401789       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:35.401927       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:45.400837       1 main.go:297] Handling node with IPs: map[172.29.198.106:{}]
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:45.401002       1 main.go:301] handling current node
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:45.401061       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:45.401072       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:45.401385       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:45.401462       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.861566    9948 logs.go:123] Gathering logs for kindnet [d758000dda95] ...
	I0127 12:36:54.861566    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d758000dda95"
	I0127 12:36:54.887046    9948 command_runner.go:130] ! I0127 12:22:14.854106       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.887809    9948 command_runner.go:130] ! I0127 12:22:14.855096       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.887809    9948 command_runner.go:130] ! I0127 12:22:14.855184       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.887809    9948 command_runner.go:130] ! I0127 12:22:24.859265       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.887897    9948 command_runner.go:130] ! I0127 12:22:24.859464       1 main.go:301] handling current node
	I0127 12:36:54.887897    9948 command_runner.go:130] ! I0127 12:22:24.859638       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.887982    9948 command_runner.go:130] ! I0127 12:22:24.859681       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.887999    9948 command_runner.go:130] ! I0127 12:22:24.860150       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.887999    9948 command_runner.go:130] ! I0127 12:22:24.860242       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.888022    9948 command_runner.go:130] ! I0127 12:22:34.860201       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.888042    9948 command_runner.go:130] ! I0127 12:22:34.860282       1 main.go:301] handling current node
	I0127 12:36:54.888077    9948 command_runner.go:130] ! I0127 12:22:34.860531       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.888077    9948 command_runner.go:130] ! I0127 12:22:34.860551       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.888106    9948 command_runner.go:130] ! I0127 12:22:34.861114       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.888126    9948 command_runner.go:130] ! I0127 12:22:34.861204       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.888126    9948 command_runner.go:130] ! I0127 12:22:44.853677       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.888126    9948 command_runner.go:130] ! I0127 12:22:44.853737       1 main.go:301] handling current node
	I0127 12:36:54.888164    9948 command_runner.go:130] ! I0127 12:22:44.853761       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.888164    9948 command_runner.go:130] ! I0127 12:22:44.853838       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.888200    9948 command_runner.go:130] ! I0127 12:22:44.855661       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.888200    9948 command_runner.go:130] ! I0127 12:22:44.855749       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.888236    9948 command_runner.go:130] ! I0127 12:22:54.856510       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.888236    9948 command_runner.go:130] ! I0127 12:22:54.856632       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.888323    9948 command_runner.go:130] ! I0127 12:22:54.857002       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.888345    9948 command_runner.go:130] ! I0127 12:22:54.857030       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.888345    9948 command_runner.go:130] ! I0127 12:22:54.857252       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:22:54.857371       1 main.go:301] handling current node
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:04.859476       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:04.859579       1 main.go:301] handling current node
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:04.859615       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:04.859623       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:04.859972       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:04.859987       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:14.853396       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:14.853515       1 main.go:301] handling current node
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:14.853537       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:14.853546       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:14.853802       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:14.853843       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:24.853600       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:24.853883       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:24.854392       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:24.854484       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:24.854688       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:24.854773       1 main.go:301] handling current node
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:34.853542       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:34.853600       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:34.854132       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:34.854286       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:34.854787       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:34.854920       1 main.go:301] handling current node
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:44.856707       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:44.856833       1 main.go:301] handling current node
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:44.856869       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:44.856877       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:44.857371       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.888925    9948 command_runner.go:130] ! I0127 12:23:44.857460       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.888969    9948 command_runner.go:130] ! I0127 12:23:54.853590       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.888969    9948 command_runner.go:130] ! I0127 12:23:54.853737       1 main.go:301] handling current node
	I0127 12:36:54.888969    9948 command_runner.go:130] ! I0127 12:23:54.853759       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.888969    9948 command_runner.go:130] ! I0127 12:23:54.853768       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889009    9948 command_runner.go:130] ! I0127 12:23:54.854333       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889033    9948 command_runner.go:130] ! I0127 12:23:54.854403       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889033    9948 command_runner.go:130] ! I0127 12:24:04.862983       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889033    9948 command_runner.go:130] ! I0127 12:24:04.863248       1 main.go:301] handling current node
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:04.863599       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:04.863808       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:04.864418       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:04.864558       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:14.854114       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:14.854152       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:14.854412       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:14.854490       1 main.go:301] handling current node
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:14.854619       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:14.854711       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:24.857372       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:24.857503       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:24.857861       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:24.857991       1 main.go:301] handling current node
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:24.858058       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:24.858126       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:34.854371       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:34.854425       1 main.go:301] handling current node
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:34.854444       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:34.854451       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:34.855276       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:34.855359       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:44.862967       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:44.863069       1 main.go:301] handling current node
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:44.863118       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:44.863132       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:44.863438       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:44.863559       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:54.856232       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:54.856343       1 main.go:301] handling current node
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:54.856417       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:54.856429       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:54.857056       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:54.857188       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:25:04.853438       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:25:04.853551       1 main.go:301] handling current node
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:25:04.853573       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:25:04.853581       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:25:04.853903       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:25:04.853979       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889638    9948 command_runner.go:130] ! I0127 12:25:14.854463       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889638    9948 command_runner.go:130] ! I0127 12:25:14.854571       1 main.go:301] handling current node
	I0127 12:36:54.889638    9948 command_runner.go:130] ! I0127 12:25:14.854614       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889638    9948 command_runner.go:130] ! I0127 12:25:14.854630       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889638    9948 command_runner.go:130] ! I0127 12:25:14.855124       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889638    9948 command_runner.go:130] ! I0127 12:25:14.855157       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889724    9948 command_runner.go:130] ! I0127 12:25:24.853742       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889724    9948 command_runner.go:130] ! I0127 12:25:24.853838       1 main.go:301] handling current node
	I0127 12:36:54.889724    9948 command_runner.go:130] ! I0127 12:25:24.853859       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889724    9948 command_runner.go:130] ! I0127 12:25:24.853866       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889724    9948 command_runner.go:130] ! I0127 12:25:24.854822       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889777    9948 command_runner.go:130] ! I0127 12:25:24.854982       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889777    9948 command_runner.go:130] ! I0127 12:25:34.853374       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889777    9948 command_runner.go:130] ! I0127 12:25:34.853516       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889777    9948 command_runner.go:130] ! I0127 12:25:34.853756       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:34.853919       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:34.854285       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:34.854360       1 main.go:301] handling current node
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:44.855075       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:44.855182       1 main.go:301] handling current node
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:44.855201       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:44.855209       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:44.856108       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:44.856191       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:54.854358       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:54.854550       1 main.go:301] handling current node
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:54.854584       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:54.854606       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:54.854829       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:54.854893       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:04.853425       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:04.853480       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:04.854150       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:04.854221       1 main.go:301] handling current node
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:04.854322       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:04.854350       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:14.853895       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:14.854577       1 main.go:301] handling current node
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:14.854615       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:14.854639       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:14.856224       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:14.856319       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:24.858046       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:24.858200       1 main.go:301] handling current node
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:24.858527       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:24.858599       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:24.859022       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:24.859118       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:34.853783       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:34.853853       1 main.go:301] handling current node
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:34.853871       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:34.853878       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:34.854193       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:34.854260       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:44.856492       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:44.856552       1 main.go:301] handling current node
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:44.856569       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:44.856575       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:44.857163       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:44.857246       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:54.858285       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:54.858431       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:54.859101       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:54.859322       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:54.859474       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:54.859544       1 main.go:301] handling current node
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:04.858831       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:04.858967       1 main.go:301] handling current node
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:04.859484       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:04.859592       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:04.860213       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:04.860314       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:14.854313       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:14.854366       1 main.go:301] handling current node
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:14.854386       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:14.854394       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:14.854883       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:14.855322       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:24.859182       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:24.859342       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:24.859757       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:24.859824       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:24.860078       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:24.860255       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:34.854206       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:34.854462       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:34.854567       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:34.854657       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:34.855188       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:34.855233       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:44.861342       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:44.861572       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:44.862224       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:44.862399       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:44.862648       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:44.862687       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:54.853605       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:54.853658       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:54.853924       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:54.854125       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:54.854203       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:54.854216       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:04.859858       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:04.859922       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:04.859984       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:04.860038       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:04.860336       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:04.860450       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:14.853470       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:14.853607       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:14.853627       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:14.853634       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:14.854800       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:14.854899       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:24.853786       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:24.853841       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:24.854051       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:24.854078       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:24.854192       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:24.854297       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:34.853571       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:34.853730       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:34.853756       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:34.853765       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:34.853988       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:34.854180       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:44.853630       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:44.854161       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:44.854753       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:44.854886       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:44.855270       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:44.855393       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:54.856731       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:54.856780       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:54.856800       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:54.856807       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:54.857466       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:54.857531       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:04.853996       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:04.854093       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:04.854113       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:04.854120       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:04.854865       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:04.855000       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:14.853874       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:14.854279       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:14.854677       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:14.854896       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:14.855469       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:14.856845       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:24.853660       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:24.853766       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:24.853786       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:24.853793       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:24.854261       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:24.854541       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:34.861616       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:34.861807       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:34.862166       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:34.862228       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:34.862400       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:34.862455       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:44.854294       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:44.854418       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:44.854439       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:44.854448       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:44.854699       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:44.854776       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:54.853707       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:54.853780       1 main.go:301] handling current node
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:29:54.853914       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:29:54.854022       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:29:54.854423       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:29:54.854566       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:04.853625       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:04.853820       1 main.go:301] handling current node
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:04.854002       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:04.854301       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:04.854878       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:04.854986       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:14.853537       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:14.853729       1 main.go:301] handling current node
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:14.853749       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:14.853756       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:14.855013       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:14.855147       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:24.853563       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:24.853757       1 main.go:301] handling current node
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:24.853779       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:24.853786       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:24.854220       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:24.854327       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:34.858899       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:34.859124       1 main.go:301] handling current node
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:34.859146       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:34.859676       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:34.860572       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:34.860819       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:44.858769       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:44.858890       1 main.go:301] handling current node
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:44.858912       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:44.858920       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:44.859720       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:44.859809       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:54.855090       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:54.855134       1 main.go:301] handling current node
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:54.855151       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:54.855157       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:54.855561       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:54.855573       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:04.854121       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:04.854237       1 main.go:301] handling current node
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:04.854256       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:04.854263       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:04.854424       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:04.854452       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:04.854544       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.29.206.88 Flags: [] Table: 0 Realm: 0} 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:14.853651       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:14.853750       1 main.go:301] handling current node
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:14.853771       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:14.853778       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:14.854005       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:14.854084       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:24.854114       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:24.854161       1 main.go:301] handling current node
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:24.854212       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:24.854223       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:24.854591       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:24.854666       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:34.862705       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:34.862793       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:34.863105       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:34.863140       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:34.863334       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:34.863362       1 main.go:301] handling current node
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:44.855275       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:44.855421       1 main.go:301] handling current node
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:44.855462       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:44.855496       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:44.856579       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:44.856690       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:54.856288       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:54.856579       1 main.go:301] handling current node
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:31:54.856914       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:31:54.857065       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:31:54.857508       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:31:54.857553       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:04.853556       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:04.853630       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:04.854583       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:04.854615       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:04.857114       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:04.857217       1 main.go:301] handling current node
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:14.854183       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:14.854348       1 main.go:301] handling current node
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:14.854376       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:14.854402       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:14.854890       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:14.854992       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:24.853770       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:24.854222       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:24.854498       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:24.854573       1 main.go:301] handling current node
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:24.854606       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:24.854613       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:34.853556       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:34.853715       1 main.go:301] handling current node
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:34.853749       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:34.853879       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:34.854386       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:34.854469       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:44.853378       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:44.853424       1 main.go:301] handling current node
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:44.853441       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:44.853447       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:44.853735       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:44.853765       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:54.859317       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:54.859396       1 main.go:301] handling current node
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:54.859415       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:54.859421       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:54.859756       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:54.859853       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:33:04.861975       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:33:04.862085       1 main.go:301] handling current node
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:33:04.862106       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:33:04.862113       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:33:04.862780       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:33:04.862861       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:33:14.853823       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:33:14.853859       1 main.go:301] handling current node
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:33:14.853877       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:33:14.853884       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:33:14.854153       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:33:14.854165       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.908680    9948 logs.go:123] Gathering logs for container status ...
	I0127 12:36:54.908680    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:36:54.963794    9948 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0127 12:36:54.963794    9948 command_runner.go:130] > 528243cca8bfb       8c811b4aec35f                                                                                         7 seconds ago        Running             busybox                   1                   ef504f99724cb       busybox-58667487b6-2jq9j
	I0127 12:36:54.963794    9948 command_runner.go:130] > b3a9ed6e130c0       c69fa2e9cbf5f                                                                                         7 seconds ago        Running             coredns                   1                   6b22dbb5ef3e0       coredns-668d6bf9bc-2qw6w
	I0127 12:36:54.963794    9948 command_runner.go:130] > 389606c183b19       6e38f40d628db                                                                                         27 seconds ago       Running             storage-provisioner       2                   b613e9a7a3565       storage-provisioner
	I0127 12:36:54.963794    9948 command_runner.go:130] > 373bec67270fb       50415e5d05f05                                                                                         About a minute ago   Running             kindnet-cni               1                   d43e4cc62e087       kindnet-z2hqq
	I0127 12:36:54.963794    9948 command_runner.go:130] > 9b2db1d0cb61c       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   b613e9a7a3565       storage-provisioner
	I0127 12:36:54.963794    9948 command_runner.go:130] > 0283b35dee3cc       e29f9c7391fd9                                                                                         About a minute ago   Running             kube-proxy                1                   34d579bb511fe       kube-proxy-s46mv
	I0127 12:36:54.963794    9948 command_runner.go:130] > ea993630a3109       95c0bda56fc4d                                                                                         About a minute ago   Running             kube-apiserver            0                   5601285bb260a       kube-apiserver-multinode-659000
	I0127 12:36:54.963794    9948 command_runner.go:130] > 0ef2a3b50bae8       a9e7e6b294baf                                                                                         About a minute ago   Running             etcd                      0                   cdf534e99b2bb       etcd-multinode-659000
	I0127 12:36:54.963794    9948 command_runner.go:130] > ed51c7eaa9666       2b0d6572d062c                                                                                         About a minute ago   Running             kube-scheduler            1                   910315897d842       kube-scheduler-multinode-659000
	I0127 12:36:54.963794    9948 command_runner.go:130] > 8d4872cda28de       019ee182b58e2                                                                                         About a minute ago   Running             kube-controller-manager   1                   b770a357d9830       kube-controller-manager-multinode-659000
	I0127 12:36:54.963794    9948 command_runner.go:130] > 998a64b2baa2d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   4c82c0ec4aeaa       busybox-58667487b6-2jq9j
	I0127 12:36:54.963794    9948 command_runner.go:130] > f818dd15d8b02       c69fa2e9cbf5f                                                                                         24 minutes ago       Exited              coredns                   0                   4a53e133a1cd6       coredns-668d6bf9bc-2qw6w
	I0127 12:36:54.963794    9948 command_runner.go:130] > d758000dda95d       kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108              24 minutes ago       Exited              kindnet-cni               0                   f2d0bd65fe50d       kindnet-z2hqq
	I0127 12:36:54.963794    9948 command_runner.go:130] > bbec7ccef7da5       e29f9c7391fd9                                                                                         24 minutes ago       Exited              kube-proxy                0                   319cddeebceb6       kube-proxy-s46mv
	I0127 12:36:54.963794    9948 command_runner.go:130] > a16e06a038601       2b0d6572d062c                                                                                         25 minutes ago       Exited              kube-scheduler            0                   5423fc5113290       kube-scheduler-multinode-659000
	I0127 12:36:54.964316    9948 command_runner.go:130] > e07a66f8f6196       019ee182b58e2                                                                                         25 minutes ago       Exited              kube-controller-manager   0                   1bd5bf99bede3       kube-controller-manager-multinode-659000
	I0127 12:36:54.966091    9948 logs.go:123] Gathering logs for dmesg ...
	I0127 12:36:54.966091    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:36:54.984232    9948 command_runner.go:130] > [Jan27 12:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0127 12:36:54.984232    9948 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0127 12:36:54.984319    9948 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0127 12:36:54.984319    9948 command_runner.go:130] > [  +0.124628] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0127 12:36:54.984319    9948 command_runner.go:130] > [  +0.022511] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0127 12:36:54.984381    9948 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0127 12:36:54.984381    9948 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0127 12:36:54.984381    9948 command_runner.go:130] > [  +0.069272] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0127 12:36:54.984425    9948 command_runner.go:130] > [  +0.020914] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0127 12:36:54.984425    9948 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0127 12:36:54.984476    9948 command_runner.go:130] > [Jan27 12:34] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0127 12:36:54.984476    9948 command_runner.go:130] > [  +0.706235] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0127 12:36:54.984476    9948 command_runner.go:130] > [  +1.791193] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0127 12:36:54.984518    9948 command_runner.go:130] > [  +6.780102] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0127 12:36:54.984536    9948 command_runner.go:130] > [  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0127 12:36:54.984568    9948 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0127 12:36:54.984568    9948 command_runner.go:130] > [Jan27 12:35] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	I0127 12:36:54.984609    9948 command_runner.go:130] > [  +0.194598] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	I0127 12:36:54.984642    9948 command_runner.go:130] > [ +25.881577] systemd-fstab-generator[1029]: Ignoring "noauto" option for root device
	I0127 12:36:54.984642    9948 command_runner.go:130] > [  +0.104839] kauditd_printk_skb: 75 callbacks suppressed
	I0127 12:36:54.984696    9948 command_runner.go:130] > [  +0.497850] systemd-fstab-generator[1069]: Ignoring "noauto" option for root device
	I0127 12:36:54.984696    9948 command_runner.go:130] > [  +0.189754] systemd-fstab-generator[1081]: Ignoring "noauto" option for root device
	I0127 12:36:54.984696    9948 command_runner.go:130] > [  +0.209865] systemd-fstab-generator[1095]: Ignoring "noauto" option for root device
	I0127 12:36:54.984749    9948 command_runner.go:130] > [  +2.995294] systemd-fstab-generator[1337]: Ignoring "noauto" option for root device
	I0127 12:36:54.984766    9948 command_runner.go:130] > [  +0.193187] systemd-fstab-generator[1349]: Ignoring "noauto" option for root device
	I0127 12:36:54.984798    9948 command_runner.go:130] > [  +0.167597] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	I0127 12:36:54.984798    9948 command_runner.go:130] > [  +0.247752] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	I0127 12:36:54.984798    9948 command_runner.go:130] > [  +0.858687] systemd-fstab-generator[1500]: Ignoring "noauto" option for root device
	I0127 12:36:54.984798    9948 command_runner.go:130] > [  +0.090112] kauditd_printk_skb: 206 callbacks suppressed
	I0127 12:36:54.984798    9948 command_runner.go:130] > [  +3.380441] systemd-fstab-generator[1641]: Ignoring "noauto" option for root device
	I0127 12:36:54.984858    9948 command_runner.go:130] > [  +1.786352] kauditd_printk_skb: 64 callbacks suppressed
	I0127 12:36:54.984884    9948 command_runner.go:130] > [  +5.236723] kauditd_printk_skb: 10 callbacks suppressed
	I0127 12:36:54.984884    9948 command_runner.go:130] > [  +4.105586] systemd-fstab-generator[2522]: Ignoring "noauto" option for root device
	I0127 12:36:54.984884    9948 command_runner.go:130] > [Jan27 12:36] kauditd_printk_skb: 70 callbacks suppressed
	I0127 12:36:54.986509    9948 logs.go:123] Gathering logs for kube-proxy [0283b35dee3c] ...
	I0127 12:36:54.986581    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0283b35dee3c"
	I0127 12:36:55.011717    9948 command_runner.go:130] ! I0127 12:35:44.449716       1 server_linux.go:66] "Using iptables proxy"
	I0127 12:36:55.011893    9948 command_runner.go:130] ! E0127 12:35:44.569403       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0127 12:36:55.011893    9948 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0127 12:36:55.011893    9948 command_runner.go:130] ! 	add table ip kube-proxy
	I0127 12:36:55.011959    9948 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:55.011959    9948 command_runner.go:130] !  >
	I0127 12:36:55.011959    9948 command_runner.go:130] ! E0127 12:35:44.599245       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0127 12:36:55.011959    9948 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0127 12:36:55.011959    9948 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0127 12:36:55.011959    9948 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:55.011959    9948 command_runner.go:130] !  >
	I0127 12:36:55.012016    9948 command_runner.go:130] ! I0127 12:35:44.767652       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.198.106"]
	I0127 12:36:55.012059    9948 command_runner.go:130] ! E0127 12:35:44.770299       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 12:36:55.012059    9948 command_runner.go:130] ! I0127 12:35:45.038438       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 12:36:55.012119    9948 command_runner.go:130] ! I0127 12:35:45.038556       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 12:36:55.012143    9948 command_runner.go:130] ! I0127 12:35:45.038587       1 server_linux.go:170] "Using iptables Proxier"
	I0127 12:36:55.012171    9948 command_runner.go:130] ! I0127 12:35:45.043111       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 12:36:55.012171    9948 command_runner.go:130] ! I0127 12:35:45.045042       1 server.go:497] "Version info" version="v1.32.1"
	I0127 12:36:55.012207    9948 command_runner.go:130] ! I0127 12:35:45.045375       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:55.012231    9948 command_runner.go:130] ! I0127 12:35:45.053262       1 config.go:199] "Starting service config controller"
	I0127 12:36:55.012260    9948 command_runner.go:130] ! I0127 12:35:45.054808       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 12:36:55.012260    9948 command_runner.go:130] ! I0127 12:35:45.054873       1 config.go:329] "Starting node config controller"
	I0127 12:36:55.012260    9948 command_runner.go:130] ! I0127 12:35:45.054880       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 12:36:55.012260    9948 command_runner.go:130] ! I0127 12:35:45.058308       1 config.go:105] "Starting endpoint slice config controller"
	I0127 12:36:55.012260    9948 command_runner.go:130] ! I0127 12:35:45.058492       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 12:36:55.012260    9948 command_runner.go:130] ! I0127 12:35:45.155116       1 shared_informer.go:320] Caches are synced for node config
	I0127 12:36:55.012260    9948 command_runner.go:130] ! I0127 12:35:45.155116       1 shared_informer.go:320] Caches are synced for service config
	I0127 12:36:55.012260    9948 command_runner.go:130] ! I0127 12:35:45.159566       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 12:36:57.516042    9948 api_server.go:253] Checking apiserver healthz at https://172.29.198.106:8443/healthz ...
	I0127 12:36:57.526317    9948 api_server.go:279] https://172.29.198.106:8443/healthz returned 200:
	ok
	I0127 12:36:57.526857    9948 round_trippers.go:463] GET https://172.29.198.106:8443/version
	I0127 12:36:57.526857    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:57.526857    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:57.526857    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:57.528764    9948 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0127 12:36:57.528764    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:57.528764    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:57 GMT
	I0127 12:36:57.528764    9948 round_trippers.go:580]     Audit-Id: edec2ca6-9776-4d7e-8c95-dd9009a1e93c
	I0127 12:36:57.528764    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:57.528764    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:57.528764    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:57.528764    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:57.528764    9948 round_trippers.go:580]     Content-Length: 263
	I0127 12:36:57.528764    9948 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "32",
	  "gitVersion": "v1.32.1",
	  "gitCommit": "e9c9be4007d1664e68796af02b8978640d2c1b26",
	  "gitTreeState": "clean",
	  "buildDate": "2025-01-15T14:31:55Z",
	  "goVersion": "go1.23.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0127 12:36:57.528764    9948 api_server.go:141] control plane version: v1.32.1
	I0127 12:36:57.528764    9948 api_server.go:131] duration metric: took 3.6379845s to wait for apiserver health ...
	I0127 12:36:57.528764    9948 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:36:57.539654    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 12:36:57.569539    9948 command_runner.go:130] > ea993630a310
	I0127 12:36:57.569607    9948 logs.go:282] 1 containers: [ea993630a310]
	I0127 12:36:57.578470    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 12:36:57.607192    9948 command_runner.go:130] > 0ef2a3b50bae
	I0127 12:36:57.608336    9948 logs.go:282] 1 containers: [0ef2a3b50bae]
	I0127 12:36:57.616888    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 12:36:57.643533    9948 command_runner.go:130] > b3a9ed6e130c
	I0127 12:36:57.644282    9948 command_runner.go:130] > f818dd15d8b0
	I0127 12:36:57.644282    9948 logs.go:282] 2 containers: [b3a9ed6e130c f818dd15d8b0]
	I0127 12:36:57.654299    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 12:36:57.682621    9948 command_runner.go:130] > ed51c7eaa966
	I0127 12:36:57.682621    9948 command_runner.go:130] > a16e06a03860
	I0127 12:36:57.682621    9948 logs.go:282] 2 containers: [ed51c7eaa966 a16e06a03860]
	I0127 12:36:57.691785    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 12:36:57.718783    9948 command_runner.go:130] > 0283b35dee3c
	I0127 12:36:57.718783    9948 command_runner.go:130] > bbec7ccef7da
	I0127 12:36:57.720841    9948 logs.go:282] 2 containers: [0283b35dee3c bbec7ccef7da]
	I0127 12:36:57.733117    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 12:36:57.757712    9948 command_runner.go:130] > 8d4872cda28d
	I0127 12:36:57.758083    9948 command_runner.go:130] > e07a66f8f619
	I0127 12:36:57.758083    9948 logs.go:282] 2 containers: [8d4872cda28d e07a66f8f619]
	I0127 12:36:57.768668    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0127 12:36:57.794534    9948 command_runner.go:130] > 373bec67270f
	I0127 12:36:57.794534    9948 command_runner.go:130] > d758000dda95
	I0127 12:36:57.794534    9948 logs.go:282] 2 containers: [373bec67270f d758000dda95]
	I0127 12:36:57.794534    9948 logs.go:123] Gathering logs for kubelet ...
	I0127 12:36:57.794721    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:36:57.825846    9948 command_runner.go:130] > Jan 27 12:35:32 multinode-659000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0127 12:36:57.825846    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1507]: I0127 12:35:33.096330    1507 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0127 12:36:57.825846    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1507]: I0127 12:35:33.097069    1507 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:57.825846    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1507]: I0127 12:35:33.098504    1507 server.go:954] "Client rotation is on, will bootstrap in background"
	I0127 12:36:57.826027    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1507]: E0127 12:35:33.099084    1507 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0127 12:36:57.826027    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:57.826154    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0127 12:36:57.826154    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0127 12:36:57.826154    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0127 12:36:57.826154    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0127 12:36:57.826277    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1565]: I0127 12:35:33.855505    1565 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0127 12:36:57.826277    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1565]: I0127 12:35:33.856023    1565 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:57.826277    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1565]: I0127 12:35:33.856456    1565 server.go:954] "Client rotation is on, will bootstrap in background"
	I0127 12:36:57.826376    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1565]: E0127 12:35:33.856573    1565 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0127 12:36:57.826616    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:57.826616    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0127 12:36:57.826616    9948 command_runner.go:130] > Jan 27 12:35:34 multinode-659000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0127 12:36:57.826762    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0127 12:36:57.826762    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.167839    1648 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0127 12:36:57.826762    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.168570    1648 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:57.826762    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.169526    1648 server.go:954] "Client rotation is on, will bootstrap in background"
	I0127 12:36:57.827036    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.171330    1648 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0127 12:36:57.827218    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.190537    1648 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:57.827292    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.208219    1648 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
	I0127 12:36:57.827370    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.208354    1648 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
	I0127 12:36:57.827370    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.217489    1648 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0127 12:36:57.827462    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.217603    1648 server.go:841] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0127 12:36:57.827561    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.218319    1648 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0127 12:36:57.827663    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.218396    1648 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-659000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I0127 12:36:57.827663    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.218720    1648 topology_manager.go:138] "Creating topology manager with none policy"
	I0127 12:36:57.827771    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.218780    1648 container_manager_linux.go:304] "Creating device plugin manager"
	I0127 12:36:57.827771    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.219430    1648 state_mem.go:36] "Initialized new in-memory state store"
	I0127 12:36:57.827771    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.221396    1648 kubelet.go:446] "Attempting to sync node with API server"
	I0127 12:36:57.827898    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.221465    1648 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0127 12:36:57.827898    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.221524    1648 kubelet.go:352] "Adding apiserver pod source"
	I0127 12:36:57.827898    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.221568    1648 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0127 12:36:57.828004    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.230949    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-659000&limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:57.828004    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.231085    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-659000&limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:57.828123    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.232363    1648 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="docker" version="27.4.0" apiVersion="v1"
	I0127 12:36:57.828123    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.236967    1648 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0127 12:36:57.828224    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.237190    1648 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0127 12:36:57.828224    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.245589    1648 watchdog_linux.go:99] "Systemd watchdog is not enabled"
	I0127 12:36:57.828316    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.245760    1648 server.go:1287] "Started kubelet"
	I0127 12:36:57.828417    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.246317    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:57.828417    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.246411    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:57.828521    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.246814    1648 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0127 12:36:57.828521    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.247495    1648 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0127 12:36:57.828620    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.249106    1648 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0127 12:36:57.828620    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.260914    1648 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
	I0127 12:36:57.828720    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.262947    1648 server.go:490] "Adding debug handlers to kubelet server"
	I0127 12:36:57.828720    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.264052    1648 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I0127 12:36:57.828720    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.267083    1648 volume_manager.go:297] "Starting Kubelet Volume Manager"
	I0127 12:36:57.828822    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.267485    1648 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"multinode-659000\" not found"
	I0127 12:36:57.828930    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.270946    1648 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.29.198.106:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-659000.181e8cd12d2fa1af  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-659000,UID:multinode-659000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-659000,},FirstTimestamp:2025-01-27 12:35:36.245739951 +0000 UTC m=+0.150414507,LastTimestamp:2025-01-27 12:35:36.245739951 +0000 UTC m=+0.150414507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-6
59000,}"
	I0127 12:36:57.829030    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.275270    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-659000?timeout=10s\": dial tcp 172.29.198.106:8443: connect: connection refused" interval="200ms"
	I0127 12:36:57.829082    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.275715    1648 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0127 12:36:57.829135    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.280615    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:57.829170    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.280911    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:57.829250    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.282354    1648 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0127 12:36:57.829250    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.282424    1648 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0127 12:36:57.829363    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.282441    1648 factory.go:221] Registration of the systemd container factory successfully
	I0127 12:36:57.829363    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.345823    1648 reconciler.go:26] "Reconciler: start to sync state"
	I0127 12:36:57.829478    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.348883    1648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0127 12:36:57.829478    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.352701    1648 cpu_manager.go:221] "Starting CPU manager" policy="none"
	I0127 12:36:57.829478    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.352736    1648 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
	I0127 12:36:57.829602    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.352866    1648 state_mem.go:36] "Initialized new in-memory state store"
	I0127 12:36:57.829602    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353577    1648 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0127 12:36:57.829705    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353729    1648 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0127 12:36:57.829705    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353769    1648 policy_none.go:49] "None policy: Start"
	I0127 12:36:57.829705    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353902    1648 memory_manager.go:186] "Starting memorymanager" policy="None"
	I0127 12:36:57.829705    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353967    1648 state_mem.go:35] "Initializing new in-memory state store"
	I0127 12:36:57.829830    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.354751    1648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0127 12:36:57.829894    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.354791    1648 status_manager.go:227] "Starting to sync pod status with apiserver"
	I0127 12:36:57.829894    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.354811    1648 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I0127 12:36:57.830000    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.354819    1648 kubelet.go:2388] "Starting kubelet main sync loop"
	I0127 12:36:57.830000    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.354862    1648 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0127 12:36:57.830137    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.355393    1648 state_mem.go:75] "Updated machine memory state"
	I0127 12:36:57.830137    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.358802    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:57.830237    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.358857    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:57.830237    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.371233    1648 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"multinode-659000\" not found"
	I0127 12:36:57.830337    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.373395    1648 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0127 12:36:57.830337    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.373786    1648 eviction_manager.go:189] "Eviction manager: starting control loop"
	I0127 12:36:57.830444    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.373887    1648 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0127 12:36:57.830444    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.380088    1648 iptables.go:577] "Could not set up iptables canary" err=<
	I0127 12:36:57.830444    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0127 12:36:57.830543    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0127 12:36:57.830543    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0127 12:36:57.830543    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0127 12:36:57.830642    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.380760    1648 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I0127 12:36:57.830642    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.380984    1648 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-659000\" not found"
	I0127 12:36:57.830730    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.382902    1648 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0127 12:36:57.830821    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.468172    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.830821    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.468821    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c82c0ec4aeaa9b21462a8248326ae982d6f7a0aee31347f1a58d216f0335177"
	I0127 12:36:57.830937    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.468934    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2d0bd65fe50d3b8a64acf8ee065aa49d1a51b768c5fe6fe9532d26fa35aa7b1"
	I0127 12:36:57.830937    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.468988    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bd5bf99bede3e691e572fc4b8a37f4f42f8a9b2520adf8bc87bdf76e8258a4b"
	I0127 12:36:57.830937    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.469050    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5423fc5113290b937df9b531c5fbd748c5d927fd5e170e8126b67bae6a814384"
	I0127 12:36:57.831043    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.470252    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.831139    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.475717    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:57.831139    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.477090    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-659000?timeout=10s\": dial tcp 172.29.198.106:8443: connect: connection refused" interval="400ms"
	I0127 12:36:57.831278    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.480196    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.198.106:8443: connect: connection refused" node="multinode-659000"
	I0127 12:36:57.831278    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.487429    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc9ef8ee86ec2e354006c4c56f82fe9ec4df472096628ad620faba06fa0b1ff8"
	I0127 12:36:57.831393    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.508448    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a53e133a1cd6ab9514cb15ac3c4f1d5683d17008b482cebb08bf4809e060709"
	I0127 12:36:57.831393    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.523288    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="319cddeebceb6ec82b5865f1c67eaf88948a282ace1113869910f5bf8c717d83"
	I0127 12:36:57.831491    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.545844    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b522c4c9f4c776ea35298b9eaf7c05d64bddd6f385e12252bdf6aada9a3e20d"
	I0127 12:36:57.831491    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.566476    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e6c90fc43fa6c0754218ff1c4162045d-kubeconfig\") pod \"kube-scheduler-multinode-659000\" (UID: \"e6c90fc43fa6c0754218ff1c4162045d\") " pod="kube-system/kube-scheduler-multinode-659000"
	I0127 12:36:57.831589    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.566534    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b9fbd177058ba298cde2a92c4ef5c601-k8s-certs\") pod \"kube-apiserver-multinode-659000\" (UID: \"b9fbd177058ba298cde2a92c4ef5c601\") " pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:57.831683    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.566560    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-kubeconfig\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:57.831683    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567472    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:57.831799    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567527    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/575cefa3aa8017dce576fa244e719a4e-etcd-certs\") pod \"etcd-multinode-659000\" (UID: \"575cefa3aa8017dce576fa244e719a4e\") " pod="kube-system/etcd-multinode-659000"
	I0127 12:36:57.831898    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567546    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/575cefa3aa8017dce576fa244e719a4e-etcd-data\") pod \"etcd-multinode-659000\" (UID: \"575cefa3aa8017dce576fa244e719a4e\") " pod="kube-system/etcd-multinode-659000"
	I0127 12:36:57.831981    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567563    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b9fbd177058ba298cde2a92c4ef5c601-ca-certs\") pod \"kube-apiserver-multinode-659000\" (UID: \"b9fbd177058ba298cde2a92c4ef5c601\") " pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:57.832030    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567580    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-ca-certs\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:57.832143    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567687    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-flexvolume-dir\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:57.832191    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567720    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-k8s-certs\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:57.832302    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567745    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b9fbd177058ba298cde2a92c4ef5c601-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-659000\" (UID: \"b9fbd177058ba298cde2a92c4ef5c601\") " pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:57.832302    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567166    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51ee4649b24aa281b3767c049c3c1d4063e516b98501648152da39ee45cb0b26"
	I0127 12:36:57.832302    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.569350    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.832302    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.570289    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.832302    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.681872    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:57.832302    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.682569    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.198.106:8443: connect: connection refused" node="multinode-659000"
	I0127 12:36:57.832302    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.878668    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-659000?timeout=10s\": dial tcp 172.29.198.106:8443: connect: connection refused" interval="800ms"
	I0127 12:36:57.832302    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: W0127 12:35:37.056372    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:57.832302    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.056534    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:57.832302    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: I0127 12:35:37.084276    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:57.832302    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.085344    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.198.106:8443: connect: connection refused" node="multinode-659000"
	I0127 12:36:57.832302    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: W0127 12:35:37.281985    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:57.832850    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.282078    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:57.832975    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: W0127 12:35:37.629266    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-659000&limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:57.833026    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.629409    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-659000&limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:57.833157    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: W0127 12:35:37.673700    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:57.833205    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.673876    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:57.833298    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.680515    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-659000?timeout=10s\": dial tcp 172.29.198.106:8443: connect: connection refused" interval="1.6s"
	I0127 12:36:57.833342    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: I0127 12:35:37.887498    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:57.833389    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.888458    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.198.106:8443: connect: connection refused" node="multinode-659000"
	I0127 12:36:57.833436    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: E0127 12:35:39.058364    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.833484    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: E0127 12:35:39.084210    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.833575    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: E0127 12:35:39.099659    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.833659    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: E0127 12:35:39.112572    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: I0127 12:35:39.489967    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:40 multinode-659000 kubelet[1648]: E0127 12:35:40.123734    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:40 multinode-659000 kubelet[1648]: E0127 12:35:40.124212    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:40 multinode-659000 kubelet[1648]: E0127 12:35:40.124507    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:40 multinode-659000 kubelet[1648]: E0127 12:35:40.124790    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.138584    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.139346    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.139719    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.469180    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.513020    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-multinode-659000\" already exists" pod="kube-system/etcd-multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.513064    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.538800    1648 kubelet_node_status.go:125] "Node was previously registered" node="multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.538905    1648 kubelet_node_status.go:79] "Successfully registered node" node="multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.538949    1648 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.539897    1648 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.540655    1648 setters.go:602] "Node became not ready" node="multinode-659000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-27T12:35:41Z","lastTransitionTime":"2025-01-27T12:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.555833    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-multinode-659000\" already exists" pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.555924    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.574323    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-multinode-659000\" already exists" pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.574484    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.589698    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-multinode-659000\" already exists" pod="kube-system/kube-scheduler-multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.247993    1648 apiserver.go:52] "Watching apiserver"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.255092    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-659000" podUID="f19e9efc-57cc-4e2a-b365-920592a7f352"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.257281    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.834245    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.257504    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.834292    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.261197    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/etcd-multinode-659000" podUID="d2a9c448-86a1-48e3-8b48-345c937e5bb4"
	I0127 12:36:57.834340    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.277187    1648 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0127 12:36:57.834387    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.304401    1648 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/etcd-multinode-659000"
	I0127 12:36:57.834434    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.304607    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-659000"
	I0127 12:36:57.834479    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.309849    1648 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:57.834526    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.309963    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:57.834578    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.343249    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae3b8daf-d674-4cfe-8652-cb5ff6ba8615-lib-modules\") pod \"kube-proxy-s46mv\" (UID: \"ae3b8daf-d674-4cfe-8652-cb5ff6ba8615\") " pod="kube-system/kube-proxy-s46mv"
	I0127 12:36:57.834668    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.343617    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9b617a9c-e2b8-45fd-bee2-45cb03d4cd42-cni-cfg\") pod \"kindnet-z2hqq\" (UID: \"9b617a9c-e2b8-45fd-bee2-45cb03d4cd42\") " pod="kube-system/kindnet-z2hqq"
	I0127 12:36:57.834712    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.343779    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b617a9c-e2b8-45fd-bee2-45cb03d4cd42-lib-modules\") pod \"kindnet-z2hqq\" (UID: \"9b617a9c-e2b8-45fd-bee2-45cb03d4cd42\") " pod="kube-system/kindnet-z2hqq"
	I0127 12:36:57.834801    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.343961    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae3b8daf-d674-4cfe-8652-cb5ff6ba8615-xtables-lock\") pod \"kube-proxy-s46mv\" (UID: \"ae3b8daf-d674-4cfe-8652-cb5ff6ba8615\") " pod="kube-system/kube-proxy-s46mv"
	I0127 12:36:57.834844    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.344263    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b617a9c-e2b8-45fd-bee2-45cb03d4cd42-xtables-lock\") pod \"kindnet-z2hqq\" (UID: \"9b617a9c-e2b8-45fd-bee2-45cb03d4cd42\") " pod="kube-system/kindnet-z2hqq"
	I0127 12:36:57.834930    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.344443    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bcfd7913-1bc0-4c24-882f-2be92ec9b046-tmp\") pod \"storage-provisioner\" (UID: \"bcfd7913-1bc0-4c24-882f-2be92ec9b046\") " pod="kube-system/storage-provisioner"
	I0127 12:36:57.834974    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.345456    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:57.835080    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.345573    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:42.845554363 +0000 UTC m=+6.750229019 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:57.835080    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.362165    1648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bf31ca1befb4fb3e8f2fd27458a3b80" path="/var/lib/kubelet/pods/6bf31ca1befb4fb3e8f2fd27458a3b80/volumes"
	I0127 12:36:57.835194    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.363294    1648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7291ea72d8be6e47ed8b536906d73549" path="/var/lib/kubelet/pods/7291ea72d8be6e47ed8b536906d73549/volumes"
	I0127 12:36:57.835243    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.396667    1648 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	I0127 12:36:57.835336    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.400478    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.835380    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.400505    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.835487    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.400550    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:42.900534148 +0000 UTC m=+6.805208804 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.835606    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.494698    1648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-659000" podStartSLOduration=0.494540064 podStartE2EDuration="494.540064ms" podCreationTimestamp="2025-01-27 12:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 12:35:42.473709794 +0000 UTC m=+6.378384350" watchObservedRunningTime="2025-01-27 12:35:42.494540064 +0000 UTC m=+6.399214620"
	I0127 12:36:57.835719    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.494964    1648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-659000" podStartSLOduration=0.494955765 podStartE2EDuration="494.955765ms" podCreationTimestamp="2025-01-27 12:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 12:35:42.493805361 +0000 UTC m=+6.398480017" watchObservedRunningTime="2025-01-27 12:35:42.494955765 +0000 UTC m=+6.399630321"
	I0127 12:36:57.835813    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.849608    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:57.835908    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.849827    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:43.849803559 +0000 UTC m=+7.754478115 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:57.835958    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.951539    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.836004    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.951579    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.836124    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.951637    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:43.951620201 +0000 UTC m=+7.856294757 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.836177    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: I0127 12:35:43.230846    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b613e9a7a356580fd5381e358408317fd6120a119c23f3f196adda302e5ca97f"
	I0127 12:36:57.836227    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: I0127 12:35:43.240666    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34d579bb511fec290478f20b13002063b43c1a71bd6f2f45f1d83bbd8ac971ab"
	I0127 12:36:57.836279    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.588436    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.836377    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: I0127 12:35:43.594121    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d43e4cc62e0877d4b65191623d58195cd33c60eff33c6e49e605f69620d5115f"
	I0127 12:36:57.836425    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: I0127 12:35:43.594816    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-659000" podUID="f19e9efc-57cc-4e2a-b365-920592a7f352"
	I0127 12:36:57.836493    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.861607    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:57.836605    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.861754    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:45.861734662 +0000 UTC m=+9.766409318 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:57.836651    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.962791    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.836701    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.962845    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.836794    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.963033    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:45.962955102 +0000 UTC m=+9.867629758 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.836886    9948 command_runner.go:130] > Jan 27 12:35:44 multinode-659000 kubelet[1648]: E0127 12:35:44.356390    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.836949    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.355639    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.836997    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.883867    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:57.837234    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.883991    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:49.883972962 +0000 UTC m=+13.788647618 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:57.837295    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.984260    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.837295    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.984313    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.837295    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.984377    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:49.984359299 +0000 UTC m=+13.889033855 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.837295    9948 command_runner.go:130] > Jan 27 12:35:46 multinode-659000 kubelet[1648]: E0127 12:35:46.358731    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.837295    9948 command_runner.go:130] > Jan 27 12:35:46 multinode-659000 kubelet[1648]: E0127 12:35:46.386967    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:57.837295    9948 command_runner.go:130] > Jan 27 12:35:47 multinode-659000 kubelet[1648]: E0127 12:35:47.355582    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.837295    9948 command_runner.go:130] > Jan 27 12:35:48 multinode-659000 kubelet[1648]: E0127 12:35:48.356308    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.837295    9948 command_runner.go:130] > Jan 27 12:35:49 multinode-659000 kubelet[1648]: E0127 12:35:49.356027    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.837295    9948 command_runner.go:130] > Jan 27 12:35:49 multinode-659000 kubelet[1648]: E0127 12:35:49.925365    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:57.837295    9948 command_runner.go:130] > Jan 27 12:35:49 multinode-659000 kubelet[1648]: E0127 12:35:49.925459    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:57.925443152 +0000 UTC m=+21.830117808 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:57.837295    9948 command_runner.go:130] > Jan 27 12:35:50 multinode-659000 kubelet[1648]: E0127 12:35:50.027100    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.837295    9948 command_runner.go:130] > Jan 27 12:35:50 multinode-659000 kubelet[1648]: E0127 12:35:50.027219    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.838030    9948 command_runner.go:130] > Jan 27 12:35:50 multinode-659000 kubelet[1648]: E0127 12:35:50.027346    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:58.027289813 +0000 UTC m=+21.931964469 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.838141    9948 command_runner.go:130] > Jan 27 12:35:50 multinode-659000 kubelet[1648]: E0127 12:35:50.355319    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.838191    9948 command_runner.go:130] > Jan 27 12:35:51 multinode-659000 kubelet[1648]: E0127 12:35:51.356503    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.838290    9948 command_runner.go:130] > Jan 27 12:35:51 multinode-659000 kubelet[1648]: E0127 12:35:51.388594    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:57.838358    9948 command_runner.go:130] > Jan 27 12:35:52 multinode-659000 kubelet[1648]: E0127 12:35:52.357390    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.838477    9948 command_runner.go:130] > Jan 27 12:35:53 multinode-659000 kubelet[1648]: E0127 12:35:53.355568    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.838605    9948 command_runner.go:130] > Jan 27 12:35:54 multinode-659000 kubelet[1648]: E0127 12:35:54.355531    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.838657    9948 command_runner.go:130] > Jan 27 12:35:55 multinode-659000 kubelet[1648]: E0127 12:35:55.356228    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.838787    9948 command_runner.go:130] > Jan 27 12:35:56 multinode-659000 kubelet[1648]: E0127 12:35:56.355726    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.838841    9948 command_runner.go:130] > Jan 27 12:35:56 multinode-659000 kubelet[1648]: E0127 12:35:56.392446    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:57.838902    9948 command_runner.go:130] > Jan 27 12:35:57 multinode-659000 kubelet[1648]: E0127 12:35:57.355790    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.838965    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.001233    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:57.839117    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.001401    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:36:14.001383565 +0000 UTC m=+37.906058121 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:57.839164    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.101493    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.839233    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.101659    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.839300    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.101748    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:36:14.101732786 +0000 UTC m=+38.006407342 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.839411    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.365026    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.839463    9948 command_runner.go:130] > Jan 27 12:35:59 multinode-659000 kubelet[1648]: E0127 12:35:59.356031    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.839523    9948 command_runner.go:130] > Jan 27 12:36:00 multinode-659000 kubelet[1648]: E0127 12:36:00.356282    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.839523    9948 command_runner.go:130] > Jan 27 12:36:01 multinode-659000 kubelet[1648]: E0127 12:36:01.356209    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.839523    9948 command_runner.go:130] > Jan 27 12:36:01 multinode-659000 kubelet[1648]: E0127 12:36:01.394292    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:57.839523    9948 command_runner.go:130] > Jan 27 12:36:02 multinode-659000 kubelet[1648]: E0127 12:36:02.355777    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.839523    9948 command_runner.go:130] > Jan 27 12:36:03 multinode-659000 kubelet[1648]: E0127 12:36:03.356166    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.839523    9948 command_runner.go:130] > Jan 27 12:36:04 multinode-659000 kubelet[1648]: E0127 12:36:04.356089    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.839523    9948 command_runner.go:130] > Jan 27 12:36:05 multinode-659000 kubelet[1648]: E0127 12:36:05.355458    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.839523    9948 command_runner.go:130] > Jan 27 12:36:06 multinode-659000 kubelet[1648]: E0127 12:36:06.356120    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.839523    9948 command_runner.go:130] > Jan 27 12:36:06 multinode-659000 kubelet[1648]: E0127 12:36:06.396811    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:57.839523    9948 command_runner.go:130] > Jan 27 12:36:07 multinode-659000 kubelet[1648]: E0127 12:36:07.355573    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.839523    9948 command_runner.go:130] > Jan 27 12:36:08 multinode-659000 kubelet[1648]: E0127 12:36:08.355837    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.839523    9948 command_runner.go:130] > Jan 27 12:36:09 multinode-659000 kubelet[1648]: E0127 12:36:09.355284    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.840061    9948 command_runner.go:130] > Jan 27 12:36:10 multinode-659000 kubelet[1648]: E0127 12:36:10.356199    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.840108    9948 command_runner.go:130] > Jan 27 12:36:11 multinode-659000 kubelet[1648]: E0127 12:36:11.356023    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.840108    9948 command_runner.go:130] > Jan 27 12:36:11 multinode-659000 kubelet[1648]: E0127 12:36:11.398054    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:12 multinode-659000 kubelet[1648]: E0127 12:36:12.355492    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:13 multinode-659000 kubelet[1648]: E0127 12:36:13.356291    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.058689    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.058911    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:36:46.058858304 +0000 UTC m=+69.963532860 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.159091    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.159277    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.159495    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:36:46.15947175 +0000 UTC m=+70.064146406 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.357000    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:15 multinode-659000 kubelet[1648]: I0127 12:36:15.031682    1648 scope.go:117] "RemoveContainer" containerID="134620caeeb93fda5b32a71962e13d1994830a35b93b18ad2387296500dff7b5"
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:15 multinode-659000 kubelet[1648]: I0127 12:36:15.032024    1648 scope.go:117] "RemoveContainer" containerID="9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f"
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:15 multinode-659000 kubelet[1648]: E0127 12:36:15.032236    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bcfd7913-1bc0-4c24-882f-2be92ec9b046)\"" pod="kube-system/storage-provisioner" podUID="bcfd7913-1bc0-4c24-882f-2be92ec9b046"
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:15 multinode-659000 kubelet[1648]: E0127 12:36:15.355738    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:16 multinode-659000 kubelet[1648]: E0127 12:36:16.356191    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:16 multinode-659000 kubelet[1648]: E0127 12:36:16.399212    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:17 multinode-659000 kubelet[1648]: E0127 12:36:17.355082    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:18 multinode-659000 kubelet[1648]: E0127 12:36:18.356067    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:19 multinode-659000 kubelet[1648]: E0127 12:36:19.355675    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.840742    9948 command_runner.go:130] > Jan 27 12:36:20 multinode-659000 kubelet[1648]: E0127 12:36:20.356455    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.840790    9948 command_runner.go:130] > Jan 27 12:36:21 multinode-659000 kubelet[1648]: E0127 12:36:21.355971    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.840790    9948 command_runner.go:130] > Jan 27 12:36:21 multinode-659000 kubelet[1648]: E0127 12:36:21.401078    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:22 multinode-659000 kubelet[1648]: E0127 12:36:22.355954    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:23 multinode-659000 kubelet[1648]: E0127 12:36:23.355387    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:24 multinode-659000 kubelet[1648]: E0127 12:36:24.355437    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:25 multinode-659000 kubelet[1648]: E0127 12:36:25.356289    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:26 multinode-659000 kubelet[1648]: E0127 12:36:26.356493    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:26 multinode-659000 kubelet[1648]: E0127 12:36:26.402364    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 kubelet[1648]: E0127 12:36:27.356407    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 kubelet[1648]: I0127 12:36:27.357050    1648 scope.go:117] "RemoveContainer" containerID="9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:28 multinode-659000 kubelet[1648]: E0127 12:36:28.356371    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:29 multinode-659000 kubelet[1648]: E0127 12:36:29.355555    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:30 multinode-659000 kubelet[1648]: E0127 12:36:30.356227    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:31 multinode-659000 kubelet[1648]: E0127 12:36:31.356043    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]: I0127 12:36:36.363314    1648 scope.go:117] "RemoveContainer" containerID="5f274e5a8851d2aeb5403952c3fba0274fe53614e2e0995d1046693d7e725d5d"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]: E0127 12:36:36.393311    1648 iptables.go:577] "Could not set up iptables canary" err=<
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]: I0127 12:36:36.409087    1648 scope.go:117] "RemoveContainer" containerID="f91e9c2d3ba64a6d34c9bab7c1953b46f4006e0bb493bd1ae993c489cd76e02c"
	I0127 12:36:57.886306    9948 logs.go:123] Gathering logs for kube-apiserver [ea993630a310] ...
	I0127 12:36:57.886306    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea993630a310"
	I0127 12:36:57.916515    9948 command_runner.go:130] ! W0127 12:35:38.851605       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0127 12:36:57.916584    9948 command_runner.go:130] ! I0127 12:35:38.853397       1 options.go:238] external host was not specified, using 172.29.198.106
	I0127 12:36:57.916584    9948 command_runner.go:130] ! I0127 12:35:38.858160       1 server.go:143] Version: v1.32.1
	I0127 12:36:57.916584    9948 command_runner.go:130] ! I0127 12:35:38.858493       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:57.916584    9948 command_runner.go:130] ! I0127 12:35:39.798695       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0127 12:36:57.916584    9948 command_runner.go:130] ! I0127 12:35:39.843688       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 12:36:57.916711    9948 command_runner.go:130] ! I0127 12:35:39.853521       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:39.853736       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:39.854572       1 instance.go:233] Using reconciler: lease
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:39.914509       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:39.914792       1 genericapiserver.go:767] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.232206       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.232893       1 apis.go:106] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.488401       1 apis.go:106] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.610998       1 apis.go:106] API group "resource.k8s.io" is not enabled, skipping.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.646097       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.646401       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.646556       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.647499       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.647580       1 genericapiserver.go:767] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.648520       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.649666       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.649756       1 genericapiserver.go:767] Skipping API autoscaling/v2beta1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.649766       1 genericapiserver.go:767] Skipping API autoscaling/v2beta2 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.651998       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.652100       1 genericapiserver.go:767] Skipping API batch/v1beta1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.653327       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.653629       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.653645       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.654270       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.654362       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.654371       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1alpha2 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.655349       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.655494       1 genericapiserver.go:767] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.657969       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.658067       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.658077       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.658845       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.658940       1 genericapiserver.go:767] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.658951       1 genericapiserver.go:767] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.660043       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0127 12:36:57.917430    9948 command_runner.go:130] ! W0127 12:35:40.660172       1 genericapiserver.go:767] Skipping API policy/v1beta1 because it has no resources.
	I0127 12:36:57.917430    9948 command_runner.go:130] ! I0127 12:35:40.662431       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0127 12:36:57.917430    9948 command_runner.go:130] ! W0127 12:35:40.662519       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.917430    9948 command_runner.go:130] ! W0127 12:35:40.662531       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:57.917430    9948 command_runner.go:130] ! I0127 12:35:40.663022       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0127 12:36:57.917430    9948 command_runner.go:130] ! W0127 12:35:40.663153       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.917430    9948 command_runner.go:130] ! W0127 12:35:40.663165       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:57.917430    9948 command_runner.go:130] ! I0127 12:35:40.666344       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0127 12:36:57.917629    9948 command_runner.go:130] ! W0127 12:35:40.666495       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.917629    9948 command_runner.go:130] ! W0127 12:35:40.666521       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:57.917660    9948 command_runner.go:130] ! I0127 12:35:40.668345       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0127 12:36:57.917660    9948 command_runner.go:130] ! W0127 12:35:40.668516       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta3 because it has no resources.
	I0127 12:36:57.917708    9948 command_runner.go:130] ! W0127 12:35:40.668527       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0127 12:36:57.917739    9948 command_runner.go:130] ! W0127 12:35:40.668531       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.917739    9948 command_runner.go:130] ! I0127 12:35:40.673502       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0127 12:36:57.917767    9948 command_runner.go:130] ! W0127 12:35:40.673587       1 genericapiserver.go:767] Skipping API apps/v1beta2 because it has no resources.
	I0127 12:36:57.917767    9948 command_runner.go:130] ! W0127 12:35:40.673597       1 genericapiserver.go:767] Skipping API apps/v1beta1 because it has no resources.
	I0127 12:36:57.917841    9948 command_runner.go:130] ! I0127 12:35:40.676193       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0127 12:36:57.917841    9948 command_runner.go:130] ! W0127 12:35:40.676284       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.917841    9948 command_runner.go:130] ! W0127 12:35:40.676294       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:57.917841    9948 command_runner.go:130] ! I0127 12:35:40.677186       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0127 12:36:57.917841    9948 command_runner.go:130] ! W0127 12:35:40.677276       1 genericapiserver.go:767] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.917940    9948 command_runner.go:130] ! I0127 12:35:40.688978       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0127 12:36:57.917940    9948 command_runner.go:130] ! W0127 12:35:40.689072       1 genericapiserver.go:767] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.917940    9948 command_runner.go:130] ! I0127 12:35:41.320439       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 12:36:57.918016    9948 command_runner.go:130] ! I0127 12:35:41.320849       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:57.918016    9948 command_runner.go:130] ! I0127 12:35:41.321234       1 secure_serving.go:213] Serving securely on [::]:8443
	I0127 12:36:57.918016    9948 command_runner.go:130] ! I0127 12:35:41.321512       1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0127 12:36:57.918106    9948 command_runner.go:130] ! I0127 12:35:41.324372       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:57.918106    9948 command_runner.go:130] ! I0127 12:35:41.325924       1 controller.go:119] Starting legacy_token_tracking_controller
	I0127 12:36:57.918106    9948 command_runner.go:130] ! I0127 12:35:41.326193       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0127 12:36:57.918106    9948 command_runner.go:130] ! I0127 12:35:41.327573       1 remote_available_controller.go:411] Starting RemoteAvailability controller
	I0127 12:36:57.918180    9948 command_runner.go:130] ! I0127 12:35:41.328217       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I0127 12:36:57.918180    9948 command_runner.go:130] ! I0127 12:35:41.328319       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I0127 12:36:57.918180    9948 command_runner.go:130] ! I0127 12:35:41.329060       1 cluster_authentication_trust_controller.go:462] Starting cluster_authentication_trust_controller controller
	I0127 12:36:57.918180    9948 command_runner.go:130] ! I0127 12:35:41.329095       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0127 12:36:57.918180    9948 command_runner.go:130] ! I0127 12:35:41.329225       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0127 12:36:57.918180    9948 command_runner.go:130] ! I0127 12:35:41.329996       1 controller.go:78] Starting OpenAPI AggregationController
	I0127 12:36:57.918264    9948 command_runner.go:130] ! I0127 12:35:41.330057       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0127 12:36:57.918264    9948 command_runner.go:130] ! I0127 12:35:41.330085       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I0127 12:36:57.918264    9948 command_runner.go:130] ! I0127 12:35:41.330333       1 apiservice_controller.go:100] Starting APIServiceRegistrationController
	I0127 12:36:57.918264    9948 command_runner.go:130] ! I0127 12:35:41.330379       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0127 12:36:57.918264    9948 command_runner.go:130] ! I0127 12:35:41.331391       1 customresource_discovery_controller.go:292] Starting DiscoveryController
	I0127 12:36:57.918387    9948 command_runner.go:130] ! I0127 12:35:41.331485       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0127 12:36:57.918719    9948 command_runner.go:130] ! I0127 12:35:41.327929       1 local_available_controller.go:156] Starting LocalAvailability controller
	I0127 12:36:57.918719    9948 command_runner.go:130] ! I0127 12:35:41.333671       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I0127 12:36:57.918719    9948 command_runner.go:130] ! I0127 12:35:41.333703       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:57.918719    9948 command_runner.go:130] ! I0127 12:35:41.333958       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 12:36:57.918799    9948 command_runner.go:130] ! I0127 12:35:41.335863       1 controller.go:142] Starting OpenAPI controller
	I0127 12:36:57.918799    9948 command_runner.go:130] ! I0127 12:35:41.336704       1 controller.go:90] Starting OpenAPI V3 controller
	I0127 12:36:57.918870    9948 command_runner.go:130] ! I0127 12:35:41.336831       1 naming_controller.go:294] Starting NamingConditionController
	I0127 12:36:57.918870    9948 command_runner.go:130] ! I0127 12:35:41.337057       1 establishing_controller.go:81] Starting EstablishingController
	I0127 12:36:57.918870    9948 command_runner.go:130] ! I0127 12:35:41.337215       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0127 12:36:57.918870    9948 command_runner.go:130] ! I0127 12:35:41.337324       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0127 12:36:57.918870    9948 command_runner.go:130] ! I0127 12:35:41.337408       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0127 12:36:57.918939    9948 command_runner.go:130] ! I0127 12:35:41.327968       1 aggregator.go:169] waiting for initial CRD sync...
	I0127 12:36:57.918939    9948 command_runner.go:130] ! I0127 12:35:41.387084       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0127 12:36:57.918939    9948 command_runner.go:130] ! I0127 12:35:41.387441       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0127 12:36:57.919011    9948 command_runner.go:130] ! I0127 12:35:41.450926       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 12:36:57.919011    9948 command_runner.go:130] ! I0127 12:35:41.451366       1 policy_source.go:240] refreshing policies
	I0127 12:36:57.919011    9948 command_runner.go:130] ! I0127 12:35:41.488750       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0127 12:36:57.919011    9948 command_runner.go:130] ! I0127 12:35:41.488990       1 aggregator.go:171] initial CRD sync complete...
	I0127 12:36:57.919011    9948 command_runner.go:130] ! I0127 12:35:41.489245       1 autoregister_controller.go:144] Starting autoregister controller
	I0127 12:36:57.919121    9948 command_runner.go:130] ! I0127 12:35:41.489480       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0127 12:36:57.919121    9948 command_runner.go:130] ! I0127 12:35:41.489653       1 cache.go:39] Caches are synced for autoregister controller
	I0127 12:36:57.919121    9948 command_runner.go:130] ! I0127 12:35:41.499151       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0127 12:36:57.919121    9948 command_runner.go:130] ! I0127 12:35:41.527390       1 shared_informer.go:320] Caches are synced for configmaps
	I0127 12:36:57.919200    9948 command_runner.go:130] ! I0127 12:35:41.528625       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0127 12:36:57.919200    9948 command_runner.go:130] ! I0127 12:35:41.529892       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0127 12:36:57.919200    9948 command_runner.go:130] ! I0127 12:35:41.530639       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0127 12:36:57.919200    9948 command_runner.go:130] ! I0127 12:35:41.531604       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0127 12:36:57.919200    9948 command_runner.go:130] ! I0127 12:35:41.531638       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0127 12:36:57.919271    9948 command_runner.go:130] ! I0127 12:35:41.534721       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0127 12:36:57.919271    9948 command_runner.go:130] ! I0127 12:35:41.540933       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 12:36:57.919271    9948 command_runner.go:130] ! I0127 12:35:41.545944       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0127 12:36:57.919271    9948 command_runner.go:130] ! I0127 12:35:42.357869       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0127 12:36:57.919341    9948 command_runner.go:130] ! I0127 12:35:42.374307       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 12:36:57.919341    9948 command_runner.go:130] ! W0127 12:35:43.074223       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.29.198.106]
	I0127 12:36:57.919341    9948 command_runner.go:130] ! I0127 12:35:43.075938       1 controller.go:615] quota admission added evaluator for: endpoints
	I0127 12:36:57.919341    9948 command_runner.go:130] ! I0127 12:35:43.085006       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0127 12:36:57.919341    9948 command_runner.go:130] ! I0127 12:35:44.603084       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0127 12:36:57.919431    9948 command_runner.go:130] ! I0127 12:35:44.989601       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0127 12:36:57.919431    9948 command_runner.go:130] ! I0127 12:35:45.141450       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0127 12:36:57.919431    9948 command_runner.go:130] ! I0127 12:35:45.327075       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 12:36:57.919431    9948 command_runner.go:130] ! I0127 12:35:45.338333       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 12:36:57.926639    9948 logs.go:123] Gathering logs for kube-proxy [0283b35dee3c] ...
	I0127 12:36:57.926639    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0283b35dee3c"
	I0127 12:36:57.949131    9948 command_runner.go:130] ! I0127 12:35:44.449716       1 server_linux.go:66] "Using iptables proxy"
	I0127 12:36:57.949131    9948 command_runner.go:130] ! E0127 12:35:44.569403       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0127 12:36:57.950036    9948 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0127 12:36:57.950036    9948 command_runner.go:130] ! 	add table ip kube-proxy
	I0127 12:36:57.950036    9948 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:57.950036    9948 command_runner.go:130] !  >
	I0127 12:36:57.950036    9948 command_runner.go:130] ! E0127 12:35:44.599245       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0127 12:36:57.950036    9948 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0127 12:36:57.950036    9948 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0127 12:36:57.950036    9948 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:57.950184    9948 command_runner.go:130] !  >
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:44.767652       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.198.106"]
	I0127 12:36:57.950184    9948 command_runner.go:130] ! E0127 12:35:44.770299       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.038438       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.038556       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.038587       1 server_linux.go:170] "Using iptables Proxier"
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.043111       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.045042       1 server.go:497] "Version info" version="v1.32.1"
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.045375       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.053262       1 config.go:199] "Starting service config controller"
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.054808       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.054873       1 config.go:329] "Starting node config controller"
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.054880       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.058308       1 config.go:105] "Starting endpoint slice config controller"
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.058492       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.155116       1 shared_informer.go:320] Caches are synced for node config
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.155116       1 shared_informer.go:320] Caches are synced for service config
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.159566       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 12:36:57.953121    9948 logs.go:123] Gathering logs for dmesg ...
	I0127 12:36:57.953121    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:36:57.974705    9948 command_runner.go:130] > [Jan27 12:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0127 12:36:57.974808    9948 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0127 12:36:57.974808    9948 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0127 12:36:57.974808    9948 command_runner.go:130] > [  +0.124628] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0127 12:36:57.974895    9948 command_runner.go:130] > [  +0.022511] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0127 12:36:57.974922    9948 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.069272] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.020914] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0127 12:36:57.974952    9948 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0127 12:36:57.974952    9948 command_runner.go:130] > [Jan27 12:34] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.706235] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +1.791193] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +6.780102] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0127 12:36:57.974952    9948 command_runner.go:130] > [Jan27 12:35] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.194598] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [ +25.881577] systemd-fstab-generator[1029]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.104839] kauditd_printk_skb: 75 callbacks suppressed
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.497850] systemd-fstab-generator[1069]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.189754] systemd-fstab-generator[1081]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.209865] systemd-fstab-generator[1095]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +2.995294] systemd-fstab-generator[1337]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.193187] systemd-fstab-generator[1349]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.167597] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.247752] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.858687] systemd-fstab-generator[1500]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.090112] kauditd_printk_skb: 206 callbacks suppressed
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +3.380441] systemd-fstab-generator[1641]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +1.786352] kauditd_printk_skb: 64 callbacks suppressed
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +5.236723] kauditd_printk_skb: 10 callbacks suppressed
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +4.105586] systemd-fstab-generator[2522]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [Jan27 12:36] kauditd_printk_skb: 70 callbacks suppressed
	I0127 12:36:57.977088    9948 logs.go:123] Gathering logs for coredns [b3a9ed6e130c] ...
	I0127 12:36:57.977088    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a9ed6e130c"
	I0127 12:36:58.003868    9948 command_runner.go:130] > .:53
	I0127 12:36:58.003868    9948 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 5e2e325279dfa828a8fd1b44d83ab4703abb0247d4beadde42157147650fe687c0862eaa4caa15a5d9139c48c9a9dd5ec3cd962ba60368e8ffb4d02ae4d29aeb
	I0127 12:36:58.003868    9948 command_runner.go:130] > CoreDNS-1.11.3
	I0127 12:36:58.003868    9948 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0127 12:36:58.003868    9948 command_runner.go:130] > [INFO] 127.0.0.1:47464 - 34099 "HINFO IN 5313391549706874198.1206200090770907475. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062040871s
	I0127 12:36:58.004242    9948 logs.go:123] Gathering logs for coredns [f818dd15d8b0] ...
	I0127 12:36:58.004242    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f818dd15d8b0"
	I0127 12:36:58.032110    9948 command_runner.go:130] > .:53
	I0127 12:36:58.032110    9948 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 5e2e325279dfa828a8fd1b44d83ab4703abb0247d4beadde42157147650fe687c0862eaa4caa15a5d9139c48c9a9dd5ec3cd962ba60368e8ffb4d02ae4d29aeb
	I0127 12:36:58.032110    9948 command_runner.go:130] > CoreDNS-1.11.3
	I0127 12:36:58.032110    9948 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0127 12:36:58.032110    9948 command_runner.go:130] > [INFO] 127.0.0.1:50782 - 35950 "HINFO IN 8787717511470146079.8254135695837817311. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.151481959s
	I0127 12:36:58.032110    9948 command_runner.go:130] > [INFO] 10.244.0.3:56186 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000430505s
	I0127 12:36:58.032110    9948 command_runner.go:130] > [INFO] 10.244.0.3:58756 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.126738988s
	I0127 12:36:58.032110    9948 command_runner.go:130] > [INFO] 10.244.0.3:36399 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.053330342s
	I0127 12:36:58.032110    9948 command_runner.go:130] > [INFO] 10.244.0.3:35359 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.140941591s
	I0127 12:36:58.032450    9948 command_runner.go:130] > [INFO] 10.244.1.2:41150 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000220803s
	I0127 12:36:58.032450    9948 command_runner.go:130] > [INFO] 10.244.1.2:57591 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0000709s
	I0127 12:36:58.032450    9948 command_runner.go:130] > [INFO] 10.244.1.2:45132 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000133002s
	I0127 12:36:58.032450    9948 command_runner.go:130] > [INFO] 10.244.1.2:48593 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000728s
	I0127 12:36:58.032565    9948 command_runner.go:130] > [INFO] 10.244.0.3:53274 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000261802s
	I0127 12:36:58.032597    9948 command_runner.go:130] > [INFO] 10.244.0.3:57676 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.069110701s
	I0127 12:36:58.032597    9948 command_runner.go:130] > [INFO] 10.244.0.3:59948 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000178302s
	I0127 12:36:58.032597    9948 command_runner.go:130] > [INFO] 10.244.0.3:39801 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000198802s
	I0127 12:36:58.032658    9948 command_runner.go:130] > [INFO] 10.244.0.3:45673 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023238636s
	I0127 12:36:58.032714    9948 command_runner.go:130] > [INFO] 10.244.0.3:42840 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154002s
	I0127 12:36:58.032714    9948 command_runner.go:130] > [INFO] 10.244.0.3:43505 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000181002s
	I0127 12:36:58.032714    9948 command_runner.go:130] > [INFO] 10.244.0.3:34935 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092101s
	I0127 12:36:58.032714    9948 command_runner.go:130] > [INFO] 10.244.1.2:54822 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155102s
	I0127 12:36:58.032826    9948 command_runner.go:130] > [INFO] 10.244.1.2:50877 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000188102s
	I0127 12:36:58.032826    9948 command_runner.go:130] > [INFO] 10.244.1.2:45384 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183802s
	I0127 12:36:58.032826    9948 command_runner.go:130] > [INFO] 10.244.1.2:35073 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000227202s
	I0127 12:36:58.032826    9948 command_runner.go:130] > [INFO] 10.244.1.2:50517 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000061101s
	I0127 12:36:58.032898    9948 command_runner.go:130] > [INFO] 10.244.1.2:37353 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130501s
	I0127 12:36:58.032936    9948 command_runner.go:130] > [INFO] 10.244.1.2:42117 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114301s
	I0127 12:36:58.032936    9948 command_runner.go:130] > [INFO] 10.244.1.2:46171 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060401s
	I0127 12:36:58.032936    9948 command_runner.go:130] > [INFO] 10.244.0.3:55282 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117601s
	I0127 12:36:58.032936    9948 command_runner.go:130] > [INFO] 10.244.0.3:41761 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162301s
	I0127 12:36:58.032936    9948 command_runner.go:130] > [INFO] 10.244.0.3:35358 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000218902s
	I0127 12:36:58.033012    9948 command_runner.go:130] > [INFO] 10.244.0.3:50342 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124402s
	I0127 12:36:58.033012    9948 command_runner.go:130] > [INFO] 10.244.1.2:38159 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159602s
	I0127 12:36:58.033012    9948 command_runner.go:130] > [INFO] 10.244.1.2:37043 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000171002s
	I0127 12:36:58.033012    9948 command_runner.go:130] > [INFO] 10.244.1.2:50762 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000168301s
	I0127 12:36:58.033068    9948 command_runner.go:130] > [INFO] 10.244.1.2:33014 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000603s
	I0127 12:36:58.033089    9948 command_runner.go:130] > [INFO] 10.244.0.3:34941 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134301s
	I0127 12:36:58.033119    9948 command_runner.go:130] > [INFO] 10.244.0.3:60117 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000393904s
	I0127 12:36:58.033119    9948 command_runner.go:130] > [INFO] 10.244.0.3:47506 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000214402s
	I0127 12:36:58.033119    9948 command_runner.go:130] > [INFO] 10.244.0.3:42968 - 5 "PTR IN 1.192.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000443604s
	I0127 12:36:58.033119    9948 command_runner.go:130] > [INFO] 10.244.1.2:52260 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193802s
	I0127 12:36:58.033183    9948 command_runner.go:130] > [INFO] 10.244.1.2:40492 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000310903s
	I0127 12:36:58.033183    9948 command_runner.go:130] > [INFO] 10.244.1.2:50341 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074s
	I0127 12:36:58.033183    9948 command_runner.go:130] > [INFO] 10.244.1.2:41676 - 5 "PTR IN 1.192.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000637s
	I0127 12:36:58.033183    9948 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0127 12:36:58.033281    9948 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0127 12:36:58.035778    9948 logs.go:123] Gathering logs for kube-controller-manager [8d4872cda28d] ...
	I0127 12:36:58.035844    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4872cda28d"
	I0127 12:36:58.072970    9948 command_runner.go:130] ! I0127 12:35:39.384985       1 serving.go:386] Generated self-signed cert in-memory
	I0127 12:36:58.072970    9948 command_runner.go:130] ! I0127 12:35:39.805936       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0127 12:36:58.072970    9948 command_runner.go:130] ! I0127 12:35:39.811206       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:58.072970    9948 command_runner.go:130] ! I0127 12:35:39.817632       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0127 12:36:58.072970    9948 command_runner.go:130] ! I0127 12:35:39.822579       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:58.072970    9948 command_runner.go:130] ! I0127 12:35:39.822772       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 12:36:58.072970    9948 command_runner.go:130] ! I0127 12:35:39.823033       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:58.072970    9948 command_runner.go:130] ! I0127 12:35:43.406116       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0127 12:36:58.072970    9948 command_runner.go:130] ! I0127 12:35:43.407249       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0127 12:36:58.072970    9948 command_runner.go:130] ! I0127 12:35:43.417237       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0127 12:36:58.072970    9948 command_runner.go:130] ! I0127 12:35:43.417292       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0127 12:36:58.072970    9948 command_runner.go:130] ! I0127 12:35:43.417300       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0127 12:36:58.073926    9948 command_runner.go:130] ! I0127 12:35:43.417307       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0127 12:36:58.073926    9948 command_runner.go:130] ! I0127 12:35:43.417506       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0127 12:36:58.073926    9948 command_runner.go:130] ! I0127 12:35:43.417534       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0127 12:36:58.073926    9948 command_runner.go:130] ! I0127 12:35:43.417553       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0127 12:36:58.073926    9948 command_runner.go:130] ! I0127 12:35:43.431621       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0127 12:36:58.073926    9948 command_runner.go:130] ! I0127 12:35:43.431964       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0127 12:36:58.073926    9948 command_runner.go:130] ! I0127 12:35:43.431989       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0127 12:36:58.073926    9948 command_runner.go:130] ! I0127 12:35:43.432010       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0127 12:36:58.073926    9948 command_runner.go:130] ! I0127 12:35:43.442961       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0127 12:36:58.073926    9948 command_runner.go:130] ! I0127 12:35:43.447308       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0127 12:36:58.073926    9948 command_runner.go:130] ! I0127 12:35:43.447396       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0127 12:36:58.073926    9948 command_runner.go:130] ! I0127 12:35:43.449412       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0127 12:36:58.073926    9948 command_runner.go:130] ! I0127 12:35:43.449608       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.466583       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.467490       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.467508       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.491988       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.493672       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.493698       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.498557       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.503953       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.503976       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.505729       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.505861       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.505872       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.509718       1 shared_informer.go:320] Caches are synced for tokens
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.510192       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.510208       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.510698       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.510714       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.512896       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.513433       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.513448       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.516433       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.516659       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.516671       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.524334       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.524358       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.524545       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.524557       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.534871       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.535028       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.535038       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.557745       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.557975       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.612615       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.612890       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.612906       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.616333       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.627087       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.627107       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.692864       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.692892       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.693095       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.700796       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.703832       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.703867       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.713912       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.714114       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.714094       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.714712       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.714721       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.721904       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.722372       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.723076       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.739709       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.739886       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.739897       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.748074       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.748419       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.748432       1 shared_informer.go:313] Waiting for caches to sync for job
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.774085       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.774108       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.774196       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.814844       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.815383       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.815410       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! W0127 12:35:43.815432       1 shared_informer.go:597] resyncPeriod 17h46m45.188948257s is smaller than resyncCheckPeriod 20h1m58.14772951s and the informer has already started. Changing it to 20h1m58.14772951s
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.815487       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.815503       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.816077       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.816613       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.817053       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.817252       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.817373       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.817397       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! W0127 12:35:43.818105       1 shared_informer.go:597] resyncPeriod 12h27m56.377400464s is smaller than resyncCheckPeriod 20h1m58.14772951s and the informer has already started. Changing it to 20h1m58.14772951s
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.818223       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.818270       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.818295       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.818319       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.818336       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.818363       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.818376       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.818392       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.818410       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.818442       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.818764       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.818778       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.819843       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.841955       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.842559       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.842587       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.842995       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.852026       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.852211       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.852253       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.922876       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.923019       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.923033       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.962858       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.962895       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.963021       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.963037       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.014798       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.016438       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.016458       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.066881       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.067018       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.067064       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0127 12:36:58.076924    9948 command_runner.go:130] ! W0127 12:35:44.227808       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.236233       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.236429       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.236541       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.236556       1 shared_informer.go:313] Waiting for caches to sync for node
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.261051       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.261341       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.261374       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.314220       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.314319       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.314352       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.364392       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.364625       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.365833       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.365937       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.365975       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.365977       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.367697       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.368067       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.368427       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.369763       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.370290       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.370408       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.370568       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.412258       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.412274       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.412282       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.412297       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.412368       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.412379       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.517568       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.517771       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.518074       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.518288       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.564449       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.564546       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.564657       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.591265       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.663628       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.727283       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.739370       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000\" does not exist"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.739797       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m02\" does not exist"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.740184       1 shared_informer.go:320] Caches are synced for crt configmap
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.740835       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m03\" does not exist"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.747985       1 shared_informer.go:320] Caches are synced for GC
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.748593       1 shared_informer.go:320] Caches are synced for job
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.765439       1 shared_informer.go:320] Caches are synced for cronjob
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.765669       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.765982       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.766264       1 shared_informer.go:320] Caches are synced for expand
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.766617       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.767305       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.767462       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.768217       1 shared_informer.go:320] Caches are synced for stateful set
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.766681       1 shared_informer.go:320] Caches are synced for daemon sets
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.774887       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.775167       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.775269       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.775418       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.778028       1 shared_informer.go:320] Caches are synced for HPA
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.793610       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.793916       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.798773       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.805302       1 shared_informer.go:320] Caches are synced for PVC protection
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.805404       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.806234       1 shared_informer.go:320] Caches are synced for PV protection
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.811621       1 shared_informer.go:320] Caches are synced for ephemeral
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.813099       1 shared_informer.go:320] Caches are synced for TTL
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.813420       1 shared_informer.go:320] Caches are synced for namespace
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.813655       1 shared_informer.go:320] Caches are synced for deployment
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.815238       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.819201       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.819433       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.820006       1 shared_informer.go:320] Caches are synced for disruption
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.821695       1 shared_informer.go:320] Caches are synced for taint
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.821905       1 shared_informer.go:320] Caches are synced for endpoint
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.824479       1 shared_informer.go:320] Caches are synced for persistent volume
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.824852       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.825228       1 shared_informer.go:320] Caches are synced for attach detach
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.825784       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.836209       1 shared_informer.go:320] Caches are synced for service account
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.836651       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.836969       1 shared_informer.go:320] Caches are synced for node
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.838015       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.838049       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.838058       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.838065       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.838200       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.838217       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.838227       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.844908       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.845551       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.845777       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.898551       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.899476       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.900201       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.900496       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m02"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.900687       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m03"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.901405       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.984858       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:45.000632       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="180.930208ms"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:45.003909       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="39.2µs"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:45.016382       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="195.414857ms"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:45.016698       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="108.2µs"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:54.975850       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:36:32.834093       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:36:32.834425       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:36:32.855708       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:36:34.928482       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:36:34.940809       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:36:34.955742       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:36:35.025877       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="15.32946ms"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:36:35.026020       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="30.3µs"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:36:40.041357       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:36:47.580904       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="50.8µs"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:36:48.616631       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="19.328909ms"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:36:48.617909       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="35.8µs"
	I0127 12:36:58.079897    9948 command_runner.go:130] ! I0127 12:36:48.650691       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="23.414753ms"
	I0127 12:36:58.079897    9948 command_runner.go:130] ! I0127 12:36:48.651163       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="28.701µs"
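
Editor's note, not output from this run: the controller-manager segment above ends with its shared informers reporting "Caches are synced", after which the harness moves on to gathering the kindnet container logs below. Purely as an illustrative aside, the sketch that follows shows the standard client-go shared-informer start/sync pattern that produces those "Waiting for caches to sync" / "Caches are synced" lines; the package paths, the in-cluster config, and the choice of the Nodes informer are assumptions for illustration, not part of minikube or this test.

// Minimal sketch (assumption: runs in-cluster, like kube-controller-manager).
// It starts one shared informer and blocks until its cache has synced,
// mirroring shared_informer.go's log lines quoted above.
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// One shared informer factory; controllers register the informers they
	// need (a Nodes informer here, purely as an example).
	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)
	nodeInformer := factory.Core().V1().Nodes().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	// Equivalent of "Waiting for caches to sync": block until the initial
	// LIST/WATCH has populated the local cache before starting work loops.
	if !cache.WaitForCacheSync(stop, nodeInformer.HasSynced) {
		panic("timed out waiting for caches to sync")
	}
	fmt.Println("caches are synced; controller work loops may start")
}

The kindnet logs gathered next are collected the same way as the segment above: minikube runs "docker logs --tail 400 <container>" inside the VM over SSH, as the ssh_runner line below shows.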
	I0127 12:36:58.095932    9948 logs.go:123] Gathering logs for kindnet [d758000dda95] ...
	I0127 12:36:58.095932    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d758000dda95"
	I0127 12:36:58.121914    9948 command_runner.go:130] ! I0127 12:22:14.854106       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:14.855096       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:14.855184       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:24.859265       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:24.859464       1 main.go:301] handling current node
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:24.859638       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:24.859681       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:24.860150       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:24.860242       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:34.860201       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:34.860282       1 main.go:301] handling current node
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:34.860531       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:34.860551       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:34.861114       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:34.861204       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:44.853677       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:44.853737       1 main.go:301] handling current node
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:44.853761       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:44.853838       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:22:44.855661       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:22:44.855749       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:22:54.856510       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:22:54.856632       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:22:54.857002       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:22:54.857030       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:22:54.857252       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:22:54.857371       1 main.go:301] handling current node
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:04.859476       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:04.859579       1 main.go:301] handling current node
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:04.859615       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:04.859623       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:04.859972       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:04.859987       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:14.853396       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:14.853515       1 main.go:301] handling current node
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:14.853537       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:14.853546       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:14.853802       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:14.853843       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:24.853600       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:24.853883       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:24.854392       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:24.854484       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:24.854688       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:24.854773       1 main.go:301] handling current node
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:34.853542       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:34.853600       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:34.854132       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:34.854286       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:34.854787       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:34.854920       1 main.go:301] handling current node
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:44.856707       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:44.856833       1 main.go:301] handling current node
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:44.856869       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:44.856877       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:44.857371       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:44.857460       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:54.853590       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:54.853737       1 main.go:301] handling current node
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:54.853759       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:54.853768       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:54.854333       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:54.854403       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:04.862983       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:04.863248       1 main.go:301] handling current node
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:04.863599       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:04.863808       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:04.864418       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:04.864558       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:14.854114       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:14.854152       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:14.854412       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:14.854490       1 main.go:301] handling current node
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:14.854619       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:14.854711       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:24.857372       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:24.857503       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:24.857861       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:24.857991       1 main.go:301] handling current node
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:24.858058       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:24.858126       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:34.854371       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:34.854425       1 main.go:301] handling current node
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:34.854444       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:34.854451       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:34.855276       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:24:34.855359       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:24:44.862967       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:24:44.863069       1 main.go:301] handling current node
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:24:44.863118       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:24:44.863132       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:24:44.863438       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:24:44.863559       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:24:54.856232       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:24:54.856343       1 main.go:301] handling current node
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:24:54.856417       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:24:54.856429       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:24:54.857056       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:24:54.857188       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:04.853438       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:04.853551       1 main.go:301] handling current node
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:04.853573       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:04.853581       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:04.853903       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:04.853979       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:14.854463       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:14.854571       1 main.go:301] handling current node
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:14.854614       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:14.854630       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:14.855124       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:14.855157       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:24.853742       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:24.853838       1 main.go:301] handling current node
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:24.853859       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:24.853866       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:24.854822       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:24.854982       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:34.853374       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:34.853516       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:34.853756       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:34.853919       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:34.854285       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:34.854360       1 main.go:301] handling current node
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:44.855075       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:44.855182       1 main.go:301] handling current node
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:44.855201       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:44.855209       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:44.856108       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:44.856191       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:54.854358       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:54.854550       1 main.go:301] handling current node
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:54.854584       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:54.854606       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:54.854829       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:54.854893       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:04.853425       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:04.853480       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:04.854150       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:04.854221       1 main.go:301] handling current node
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:04.854322       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:04.854350       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:14.853895       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:14.854577       1 main.go:301] handling current node
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:14.854615       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:14.854639       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:14.856224       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:14.856319       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:24.858046       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:24.858200       1 main.go:301] handling current node
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:24.858527       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:24.858599       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:24.859022       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:24.859118       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:34.853783       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:34.853853       1 main.go:301] handling current node
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:34.853871       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:34.853878       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:34.854193       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:34.854260       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:44.856492       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:44.856552       1 main.go:301] handling current node
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:44.856569       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:44.856575       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:44.857163       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:44.857246       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:54.858285       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:54.858431       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:54.859101       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:54.859322       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:54.859474       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:26:54.859544       1 main.go:301] handling current node
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:04.858831       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:04.858967       1 main.go:301] handling current node
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:04.859484       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:04.859592       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:04.860213       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:04.860314       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:14.854313       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:14.854366       1 main.go:301] handling current node
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:14.854386       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:14.854394       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:14.854883       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:14.855322       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:24.859182       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:24.859342       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:24.859757       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:24.859824       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:24.860078       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:24.860255       1 main.go:301] handling current node
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:34.854206       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:34.854462       1 main.go:301] handling current node
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:34.854567       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:34.854657       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:34.855188       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:34.855233       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:44.861342       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:44.861572       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:44.862224       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:44.862399       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:44.862648       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:44.862687       1 main.go:301] handling current node
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:54.853605       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:54.853658       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:54.853924       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:54.854125       1 main.go:301] handling current node
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:54.854203       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:54.854216       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:04.859858       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:04.859922       1 main.go:301] handling current node
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:04.859984       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:04.860038       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:04.860336       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:04.860450       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:14.853470       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:14.853607       1 main.go:301] handling current node
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:14.853627       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:14.853634       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:14.854800       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:14.854899       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:24.853786       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:24.853841       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:24.854051       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:24.854078       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:24.854192       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:24.854297       1 main.go:301] handling current node
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:34.853571       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:34.853730       1 main.go:301] handling current node
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:34.853756       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:34.853765       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:34.853988       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:34.854180       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:44.853630       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:44.854161       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:44.854753       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:44.854886       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:44.855270       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:44.855393       1 main.go:301] handling current node
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:54.856731       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:54.856780       1 main.go:301] handling current node
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:54.856800       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:54.856807       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:54.857466       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:28:54.857531       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:04.853996       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:04.854093       1 main.go:301] handling current node
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:04.854113       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:04.854120       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:04.854865       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:04.855000       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:14.853874       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:14.854279       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:14.854677       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:14.854896       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:14.855469       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:14.856845       1 main.go:301] handling current node
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:24.853660       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:24.853766       1 main.go:301] handling current node
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:24.853786       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:24.853793       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:24.854261       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:24.854541       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:34.861616       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:34.861807       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:34.862166       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:34.862228       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:34.862400       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:34.862455       1 main.go:301] handling current node
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:44.854294       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:44.854418       1 main.go:301] handling current node
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:44.854439       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:44.854448       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:44.854699       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:44.854776       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:54.853707       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:54.853780       1 main.go:301] handling current node
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:54.853914       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:54.854022       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:54.854423       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:54.854566       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:04.853625       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:04.853820       1 main.go:301] handling current node
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:04.854002       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:04.854301       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:04.854878       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:04.854986       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:14.853537       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:14.853729       1 main.go:301] handling current node
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:14.853749       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:14.853756       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:14.855013       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:14.855147       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:24.853563       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:24.853757       1 main.go:301] handling current node
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:24.853779       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:24.853786       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:24.854220       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:24.854327       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:34.858899       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:34.859124       1 main.go:301] handling current node
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:34.859146       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:34.859676       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:34.860572       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:34.860819       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:44.858769       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:44.858890       1 main.go:301] handling current node
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:44.858912       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:44.858920       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:44.859720       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:44.859809       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:54.855090       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:54.855134       1 main.go:301] handling current node
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:54.855151       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:54.855157       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:54.855561       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:54.855573       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:31:04.854121       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:31:04.854237       1 main.go:301] handling current node
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:31:04.854256       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:31:04.854263       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:31:04.854424       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:31:04.854452       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:31:04.854544       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.29.206.88 Flags: [] Table: 0 Realm: 0} 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:31:14.853651       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:31:14.853750       1 main.go:301] handling current node
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:14.853771       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:14.853778       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:14.854005       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:14.854084       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:24.854114       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:24.854161       1 main.go:301] handling current node
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:24.854212       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:24.854223       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:24.854591       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:24.854666       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:34.862705       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:34.862793       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:34.863105       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:34.863140       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:34.863334       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:34.863362       1 main.go:301] handling current node
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:44.855275       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:44.855421       1 main.go:301] handling current node
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:44.855462       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:44.855496       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:44.856579       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:44.856690       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:54.856288       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:54.856579       1 main.go:301] handling current node
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:54.856914       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:54.857065       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:54.857508       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:54.857553       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:04.853556       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:04.853630       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:04.854583       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:04.854615       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:04.857114       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:04.857217       1 main.go:301] handling current node
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:14.854183       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:14.854348       1 main.go:301] handling current node
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:14.854376       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:14.854402       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:14.854890       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:14.854992       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:24.853770       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:24.854222       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:24.854498       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:24.854573       1 main.go:301] handling current node
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:24.854606       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:24.854613       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:34.853556       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:34.853715       1 main.go:301] handling current node
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:34.853749       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:34.853879       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:34.854386       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:34.854469       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:44.853378       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:44.853424       1 main.go:301] handling current node
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:44.853441       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:44.853447       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:44.853735       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:44.853765       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:54.859317       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:54.859396       1 main.go:301] handling current node
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:54.859415       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:54.859421       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:54.859756       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:54.859853       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.127909    9948 command_runner.go:130] ! I0127 12:33:04.861975       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.127909    9948 command_runner.go:130] ! I0127 12:33:04.862085       1 main.go:301] handling current node
	I0127 12:36:58.127909    9948 command_runner.go:130] ! I0127 12:33:04.862106       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.127909    9948 command_runner.go:130] ! I0127 12:33:04.862113       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.127909    9948 command_runner.go:130] ! I0127 12:33:04.862780       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.127909    9948 command_runner.go:130] ! I0127 12:33:04.862861       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.127909    9948 command_runner.go:130] ! I0127 12:33:14.853823       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.127909    9948 command_runner.go:130] ! I0127 12:33:14.853859       1 main.go:301] handling current node
	I0127 12:36:58.127909    9948 command_runner.go:130] ! I0127 12:33:14.853877       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.127909    9948 command_runner.go:130] ! I0127 12:33:14.853884       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.127909    9948 command_runner.go:130] ! I0127 12:33:14.854153       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.127909    9948 command_runner.go:130] ! I0127 12:33:14.854165       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.143920    9948 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:36:58.143920    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 12:36:58.334316    9948 command_runner.go:130] > Name:               multinode-659000
	I0127 12:36:58.334408    9948 command_runner.go:130] > Roles:              control-plane
	I0127 12:36:58.334408    9948 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0127 12:36:58.334483    9948 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0127 12:36:58.334483    9948 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0127 12:36:58.334483    9948 command_runner.go:130] >                     kubernetes.io/hostname=multinode-659000
	I0127 12:36:58.334483    9948 command_runner.go:130] >                     kubernetes.io/os=linux
	I0127 12:36:58.334483    9948 command_runner.go:130] >                     minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	I0127 12:36:58.334483    9948 command_runner.go:130] >                     minikube.k8s.io/name=multinode-659000
	I0127 12:36:58.334541    9948 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0127 12:36:58.334541    9948 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_01_27T12_12_00_0700
	I0127 12:36:58.334541    9948 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0127 12:36:58.334541    9948 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0127 12:36:58.334593    9948 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0127 12:36:58.334616    9948 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0127 12:36:58.334616    9948 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0127 12:36:58.334616    9948 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0127 12:36:58.334616    9948 command_runner.go:130] > CreationTimestamp:  Mon, 27 Jan 2025 12:11:55 +0000
	I0127 12:36:58.334687    9948 command_runner.go:130] > Taints:             <none>
	I0127 12:36:58.334687    9948 command_runner.go:130] > Unschedulable:      false
	I0127 12:36:58.334687    9948 command_runner.go:130] > Lease:
	I0127 12:36:58.334687    9948 command_runner.go:130] >   HolderIdentity:  multinode-659000
	I0127 12:36:58.334687    9948 command_runner.go:130] >   AcquireTime:     <unset>
	I0127 12:36:58.334687    9948 command_runner.go:130] >   RenewTime:       Mon, 27 Jan 2025 12:36:52 +0000
	I0127 12:36:58.334687    9948 command_runner.go:130] > Conditions:
	I0127 12:36:58.334687    9948 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0127 12:36:58.334779    9948 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0127 12:36:58.334779    9948 command_runner.go:130] >   MemoryPressure   False   Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:11:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0127 12:36:58.334859    9948 command_runner.go:130] >   DiskPressure     False   Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:11:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0127 12:36:58.334859    9948 command_runner.go:130] >   PIDPressure      False   Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:11:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0127 12:36:58.334887    9948 command_runner.go:130] >   Ready            True    Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:36:32 +0000   KubeletReady                 kubelet is posting ready status
	I0127 12:36:58.334934    9948 command_runner.go:130] > Addresses:
	I0127 12:36:58.334956    9948 command_runner.go:130] >   InternalIP:  172.29.198.106
	I0127 12:36:58.334956    9948 command_runner.go:130] >   Hostname:    multinode-659000
	I0127 12:36:58.334982    9948 command_runner.go:130] > Capacity:
	I0127 12:36:58.334982    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:58.334982    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:58.335025    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:58.335025    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:58.335025    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:58.335061    9948 command_runner.go:130] > Allocatable:
	I0127 12:36:58.335061    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:58.335061    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:58.335061    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:58.335061    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:58.335061    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:58.335061    9948 command_runner.go:130] > System Info:
	I0127 12:36:58.335061    9948 command_runner.go:130] >   Machine ID:                 312902fc96b948148d51eecf097c4a9d
	I0127 12:36:58.335061    9948 command_runner.go:130] >   System UUID:                be6234aa-9e29-bb41-8165-59b265a4d7d0
	I0127 12:36:58.335061    9948 command_runner.go:130] >   Boot ID:                    058425a5-0652-4c5c-a517-2369b8cac13d
	I0127 12:36:58.335061    9948 command_runner.go:130] >   Kernel Version:             5.10.207
	I0127 12:36:58.335061    9948 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0127 12:36:58.335061    9948 command_runner.go:130] >   Operating System:           linux
	I0127 12:36:58.335209    9948 command_runner.go:130] >   Architecture:               amd64
	I0127 12:36:58.335209    9948 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0127 12:36:58.335209    9948 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0127 12:36:58.335209    9948 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0127 12:36:58.335252    9948 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0127 12:36:58.335252    9948 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0127 12:36:58.335306    9948 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0127 12:36:58.335306    9948 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0127 12:36:58.335306    9948 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0127 12:36:58.335306    9948 command_runner.go:130] >   default                     busybox-58667487b6-2jq9j                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0127 12:36:58.335374    9948 command_runner.go:130] >   kube-system                 coredns-668d6bf9bc-2qw6w                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0127 12:36:58.335374    9948 command_runner.go:130] >   kube-system                 etcd-multinode-659000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         76s
	I0127 12:36:58.335374    9948 command_runner.go:130] >   kube-system                 kindnet-z2hqq                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0127 12:36:58.335435    9948 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-659000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	I0127 12:36:58.335460    9948 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-659000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0127 12:36:58.335517    9948 command_runner.go:130] >   kube-system                 kube-proxy-s46mv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0127 12:36:58.335559    9948 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-659000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	I0127 12:36:58.335580    9948 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0127 12:36:58.335580    9948 command_runner.go:130] > Allocated resources:
	I0127 12:36:58.335580    9948 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0127 12:36:58.335580    9948 command_runner.go:130] >   Resource           Requests     Limits
	I0127 12:36:58.335580    9948 command_runner.go:130] >   --------           --------     ------
	I0127 12:36:58.335635    9948 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0127 12:36:58.335657    9948 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0127 12:36:58.335657    9948 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0127 12:36:58.335680    9948 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0127 12:36:58.335680    9948 command_runner.go:130] > Events:
	I0127 12:36:58.335704    9948 command_runner.go:130] >   Type     Reason                   Age                From             Message
	I0127 12:36:58.335733    9948 command_runner.go:130] >   ----     ------                   ----               ----             -------
	I0127 12:36:58.335733    9948 command_runner.go:130] >   Normal   Starting                 24m                kube-proxy       
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   Starting                 73s                kube-proxy       
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   Starting                 25m                kubelet          Starting kubelet.
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   NodeHasSufficientMemory  25m (x8 over 25m)  kubelet          Node multinode-659000 status is now: NodeHasSufficientMemory
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    25m (x8 over 25m)  kubelet          Node multinode-659000 status is now: NodeHasNoDiskPressure
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   NodeHasSufficientPID     25m (x7 over 25m)  kubelet          Node multinode-659000 status is now: NodeHasSufficientPID
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    24m                kubelet          Node multinode-659000 status is now: NodeHasNoDiskPressure
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   NodeHasSufficientMemory  24m                kubelet          Node multinode-659000 status is now: NodeHasSufficientMemory
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   NodeHasSufficientPID     24m                kubelet          Node multinode-659000 status is now: NodeHasSufficientPID
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   Starting                 24m                kubelet          Starting kubelet.
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   RegisteredNode           24m                node-controller  Node multinode-659000 event: Registered Node multinode-659000 in Controller
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   NodeReady                24m                kubelet          Node multinode-659000 status is now: NodeReady
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   Starting                 82s                kubelet          Starting kubelet.
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   NodeHasSufficientMemory  82s (x8 over 82s)  kubelet          Node multinode-659000 status is now: NodeHasSufficientMemory
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    82s (x8 over 82s)  kubelet          Node multinode-659000 status is now: NodeHasNoDiskPressure
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   NodeHasSufficientPID     82s (x7 over 82s)  kubelet          Node multinode-659000 status is now: NodeHasSufficientPID
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Warning  Rebooted                 77s                kubelet          Node multinode-659000 has been rebooted, boot id: 058425a5-0652-4c5c-a517-2369b8cac13d
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   RegisteredNode           74s                node-controller  Node multinode-659000 event: Registered Node multinode-659000 in Controller
	I0127 12:36:58.335759    9948 command_runner.go:130] > Name:               multinode-659000-m02
	I0127 12:36:58.335759    9948 command_runner.go:130] > Roles:              <none>
	I0127 12:36:58.335759    9948 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0127 12:36:58.335759    9948 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0127 12:36:58.335759    9948 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0127 12:36:58.335759    9948 command_runner.go:130] >                     kubernetes.io/hostname=multinode-659000-m02
	I0127 12:36:58.335759    9948 command_runner.go:130] >                     kubernetes.io/os=linux
	I0127 12:36:58.335759    9948 command_runner.go:130] >                     minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	I0127 12:36:58.335759    9948 command_runner.go:130] >                     minikube.k8s.io/name=multinode-659000
	I0127 12:36:58.335759    9948 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0127 12:36:58.336287    9948 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_01_27T12_15_08_0700
	I0127 12:36:58.336347    9948 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0127 12:36:58.336347    9948 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0127 12:36:58.336347    9948 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0127 12:36:58.336507    9948 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0127 12:36:58.336507    9948 command_runner.go:130] > CreationTimestamp:  Mon, 27 Jan 2025 12:15:07 +0000
	I0127 12:36:58.336507    9948 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0127 12:36:58.336507    9948 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0127 12:36:58.336507    9948 command_runner.go:130] > Unschedulable:      false
	I0127 12:36:58.336507    9948 command_runner.go:130] > Lease:
	I0127 12:36:58.336507    9948 command_runner.go:130] >   HolderIdentity:  multinode-659000-m02
	I0127 12:36:58.336507    9948 command_runner.go:130] >   AcquireTime:     <unset>
	I0127 12:36:58.336507    9948 command_runner.go:130] >   RenewTime:       Mon, 27 Jan 2025 12:32:39 +0000
	I0127 12:36:58.336507    9948 command_runner.go:130] > Conditions:
	I0127 12:36:58.336507    9948 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0127 12:36:58.336507    9948 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0127 12:36:58.336507    9948 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:58.336507    9948 command_runner.go:130] >   DiskPressure     Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:58.336507    9948 command_runner.go:130] >   PIDPressure      Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:58.336507    9948 command_runner.go:130] >   Ready            Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:58.336507    9948 command_runner.go:130] > Addresses:
	I0127 12:36:58.336507    9948 command_runner.go:130] >   InternalIP:  172.29.199.129
	I0127 12:36:58.336507    9948 command_runner.go:130] >   Hostname:    multinode-659000-m02
	I0127 12:36:58.336507    9948 command_runner.go:130] > Capacity:
	I0127 12:36:58.336507    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:58.336507    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:58.336507    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:58.336507    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:58.336507    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:58.336507    9948 command_runner.go:130] > Allocatable:
	I0127 12:36:58.336507    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:58.336507    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:58.336507    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:58.336507    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:58.336507    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:58.336507    9948 command_runner.go:130] > System Info:
	I0127 12:36:58.336507    9948 command_runner.go:130] >   Machine ID:                 30ce15ff72904b54b07c49f3e2f28802
	I0127 12:36:58.336507    9948 command_runner.go:130] >   System UUID:                b6923799-fa1e-b54c-9340-50dd6a2378f5
	I0127 12:36:58.336507    9948 command_runner.go:130] >   Boot ID:                    3308d183-ec79-4aeb-9d90-80d47cdbff63
	I0127 12:36:58.336507    9948 command_runner.go:130] >   Kernel Version:             5.10.207
	I0127 12:36:58.336507    9948 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0127 12:36:58.336507    9948 command_runner.go:130] >   Operating System:           linux
	I0127 12:36:58.336507    9948 command_runner.go:130] >   Architecture:               amd64
	I0127 12:36:58.336507    9948 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0127 12:36:58.336507    9948 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0127 12:36:58.336507    9948 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0127 12:36:58.337030    9948 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0127 12:36:58.337030    9948 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0127 12:36:58.337030    9948 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0127 12:36:58.337086    9948 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0127 12:36:58.337154    9948 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0127 12:36:58.337154    9948 command_runner.go:130] >   default                     busybox-58667487b6-ktfxc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0127 12:36:58.337183    9948 command_runner.go:130] >   kube-system                 kindnet-n7vjl               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0127 12:36:58.337214    9948 command_runner.go:130] >   kube-system                 kube-proxy-pjhc8            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0127 12:36:58.337214    9948 command_runner.go:130] > Allocated resources:
	I0127 12:36:58.337214    9948 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0127 12:36:58.337214    9948 command_runner.go:130] >   Resource           Requests   Limits
	I0127 12:36:58.337214    9948 command_runner.go:130] >   --------           --------   ------
	I0127 12:36:58.337214    9948 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0127 12:36:58.337214    9948 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0127 12:36:58.337214    9948 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0127 12:36:58.337214    9948 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0127 12:36:58.337309    9948 command_runner.go:130] > Events:
	I0127 12:36:58.337309    9948 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0127 12:36:58.337309    9948 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0127 12:36:58.337338    9948 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0127 12:36:58.337338    9948 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-659000-m02 status is now: NodeHasSufficientMemory
	I0127 12:36:58.337338    9948 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-659000-m02 status is now: NodeHasNoDiskPressure
	I0127 12:36:58.337338    9948 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-659000-m02 status is now: NodeHasSufficientPID
	I0127 12:36:58.337338    9948 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:58.337338    9948 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-659000-m02 event: Registered Node multinode-659000-m02 in Controller
	I0127 12:36:58.337338    9948 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-659000-m02 status is now: NodeReady
	I0127 12:36:58.337338    9948 command_runner.go:130] >   Normal  RegisteredNode           74s                node-controller  Node multinode-659000-m02 event: Registered Node multinode-659000-m02 in Controller
	I0127 12:36:58.337338    9948 command_runner.go:130] >   Normal  NodeNotReady             24s                node-controller  Node multinode-659000-m02 status is now: NodeNotReady
	I0127 12:36:58.337338    9948 command_runner.go:130] > Name:               multinode-659000-m03
	I0127 12:36:58.337338    9948 command_runner.go:130] > Roles:              <none>
	I0127 12:36:58.337338    9948 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0127 12:36:58.337338    9948 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0127 12:36:58.337338    9948 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0127 12:36:58.337338    9948 command_runner.go:130] >                     kubernetes.io/hostname=multinode-659000-m03
	I0127 12:36:58.337338    9948 command_runner.go:130] >                     kubernetes.io/os=linux
	I0127 12:36:58.337338    9948 command_runner.go:130] >                     minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	I0127 12:36:58.337338    9948 command_runner.go:130] >                     minikube.k8s.io/name=multinode-659000
	I0127 12:36:58.337338    9948 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0127 12:36:58.337338    9948 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_01_27T12_31_04_0700
	I0127 12:36:58.337338    9948 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0127 12:36:58.337338    9948 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0127 12:36:58.337338    9948 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0127 12:36:58.337338    9948 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0127 12:36:58.337338    9948 command_runner.go:130] > CreationTimestamp:  Mon, 27 Jan 2025 12:31:04 +0000
	I0127 12:36:58.337338    9948 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0127 12:36:58.337338    9948 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0127 12:36:58.337338    9948 command_runner.go:130] > Unschedulable:      false
	I0127 12:36:58.337338    9948 command_runner.go:130] > Lease:
	I0127 12:36:58.337338    9948 command_runner.go:130] >   HolderIdentity:  multinode-659000-m03
	I0127 12:36:58.337338    9948 command_runner.go:130] >   AcquireTime:     <unset>
	I0127 12:36:58.337338    9948 command_runner.go:130] >   RenewTime:       Mon, 27 Jan 2025 12:32:15 +0000
	I0127 12:36:58.337338    9948 command_runner.go:130] > Conditions:
	I0127 12:36:58.337338    9948 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0127 12:36:58.337338    9948 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0127 12:36:58.337338    9948 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:58.337338    9948 command_runner.go:130] >   DiskPressure     Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:58.337338    9948 command_runner.go:130] >   PIDPressure      Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:58.337338    9948 command_runner.go:130] >   Ready            Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:58.337338    9948 command_runner.go:130] > Addresses:
	I0127 12:36:58.337338    9948 command_runner.go:130] >   InternalIP:  172.29.206.88
	I0127 12:36:58.337338    9948 command_runner.go:130] >   Hostname:    multinode-659000-m03
	I0127 12:36:58.337338    9948 command_runner.go:130] > Capacity:
	I0127 12:36:58.337338    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:58.337338    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:58.337866    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:58.337866    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:58.337925    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:58.337925    9948 command_runner.go:130] > Allocatable:
	I0127 12:36:58.337925    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:58.337925    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:58.337925    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:58.337925    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:58.337925    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:58.337925    9948 command_runner.go:130] > System Info:
	I0127 12:36:58.337925    9948 command_runner.go:130] >   Machine ID:                 5cd7b7bdbad940e0831e949f70fdd5af
	I0127 12:36:58.337925    9948 command_runner.go:130] >   System UUID:                bab0a90b-9ed8-ba42-88b9-fc6568ad7a53
	I0127 12:36:58.338031    9948 command_runner.go:130] >   Boot ID:                    9d0d04c8-71ef-487a-a13c-e1de6463b3fe
	I0127 12:36:58.338031    9948 command_runner.go:130] >   Kernel Version:             5.10.207
	I0127 12:36:58.338031    9948 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0127 12:36:58.338031    9948 command_runner.go:130] >   Operating System:           linux
	I0127 12:36:58.338031    9948 command_runner.go:130] >   Architecture:               amd64
	I0127 12:36:58.338107    9948 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0127 12:36:58.338107    9948 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0127 12:36:58.338128    9948 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0127 12:36:58.338155    9948 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0127 12:36:58.338155    9948 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0127 12:36:58.338155    9948 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0127 12:36:58.338155    9948 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0127 12:36:58.338155    9948 command_runner.go:130] >   kube-system                 kindnet-kpfjt       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	I0127 12:36:58.338155    9948 command_runner.go:130] >   kube-system                 kube-proxy-sk5js    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	I0127 12:36:58.338155    9948 command_runner.go:130] > Allocated resources:
	I0127 12:36:58.338155    9948 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Resource           Requests   Limits
	I0127 12:36:58.338155    9948 command_runner.go:130] >   --------           --------   ------
	I0127 12:36:58.338155    9948 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0127 12:36:58.338155    9948 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0127 12:36:58.338155    9948 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0127 12:36:58.338155    9948 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0127 12:36:58.338155    9948 command_runner.go:130] > Events:
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0127 12:36:58.338155    9948 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Normal  Starting                 5m50s                  kube-proxy       
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Normal  NodeHasSufficientMemory  17m (x2 over 17m)      kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientMemory
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Normal  NodeHasSufficientPID     17m (x2 over 17m)      kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientPID
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Normal  NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    17m (x2 over 17m)      kubelet          Node multinode-659000-m03 status is now: NodeHasNoDiskPressure
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-659000-m03 status is now: NodeReady
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Normal  Starting                 5m55s                  kubelet          Starting kubelet.
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Normal  CIDRAssignmentFailed     5m54s                  cidrAllocator    Node multinode-659000-m03 status is now: CIDRAssignmentFailed
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m54s (x2 over 5m54s)  kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientMemory
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m54s (x2 over 5m54s)  kubelet          Node multinode-659000-m03 status is now: NodeHasNoDiskPressure
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m54s (x2 over 5m54s)  kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientPID
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m54s                  kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:58.338677    9948 command_runner.go:130] >   Normal  RegisteredNode           5m50s                  node-controller  Node multinode-659000-m03 event: Registered Node multinode-659000-m03 in Controller
	I0127 12:36:58.338733    9948 command_runner.go:130] >   Normal  NodeReady                5m36s                  kubelet          Node multinode-659000-m03 status is now: NodeReady
	I0127 12:36:58.338733    9948 command_runner.go:130] >   Normal  NodeNotReady             3m50s                  node-controller  Node multinode-659000-m03 status is now: NodeNotReady
	I0127 12:36:58.338733    9948 command_runner.go:130] >   Normal  RegisteredNode           74s                    node-controller  Node multinode-659000-m03 event: Registered Node multinode-659000-m03 in Controller
	I0127 12:36:58.348475    9948 logs.go:123] Gathering logs for kube-proxy [bbec7ccef7da] ...
	I0127 12:36:58.348475    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbec7ccef7da"
	I0127 12:36:58.383936    9948 command_runner.go:130] ! I0127 12:12:05.290111       1 server_linux.go:66] "Using iptables proxy"
	I0127 12:36:58.383936    9948 command_runner.go:130] ! E0127 12:12:05.321300       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0127 12:36:58.383936    9948 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0127 12:36:58.383936    9948 command_runner.go:130] ! 	add table ip kube-proxy
	I0127 12:36:58.383936    9948 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:58.383936    9948 command_runner.go:130] !  >
	I0127 12:36:58.383936    9948 command_runner.go:130] ! E0127 12:12:05.352123       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0127 12:36:58.383936    9948 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0127 12:36:58.383936    9948 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0127 12:36:58.383936    9948 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:58.383936    9948 command_runner.go:130] !  >
	I0127 12:36:58.383936    9948 command_runner.go:130] ! I0127 12:12:05.378799       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.204.17"]
	I0127 12:36:58.383936    9948 command_runner.go:130] ! E0127 12:12:05.378872       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 12:36:58.383936    9948 command_runner.go:130] ! I0127 12:12:05.470419       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 12:36:58.383936    9948 command_runner.go:130] ! I0127 12:12:05.470552       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 12:36:58.383936    9948 command_runner.go:130] ! I0127 12:12:05.470596       1 server_linux.go:170] "Using iptables Proxier"
	I0127 12:36:58.383936    9948 command_runner.go:130] ! I0127 12:12:05.475557       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 12:36:58.383936    9948 command_runner.go:130] ! I0127 12:12:05.476697       1 server.go:497] "Version info" version="v1.32.1"
	I0127 12:36:58.383936    9948 command_runner.go:130] ! I0127 12:12:05.476717       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:58.383936    9948 command_runner.go:130] ! I0127 12:12:05.478788       1 config.go:199] "Starting service config controller"
	I0127 12:36:58.384955    9948 command_runner.go:130] ! I0127 12:12:05.478844       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 12:36:58.384955    9948 command_runner.go:130] ! I0127 12:12:05.478916       1 config.go:105] "Starting endpoint slice config controller"
	I0127 12:36:58.384955    9948 command_runner.go:130] ! I0127 12:12:05.479018       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 12:36:58.384955    9948 command_runner.go:130] ! I0127 12:12:05.480053       1 config.go:329] "Starting node config controller"
	I0127 12:36:58.384955    9948 command_runner.go:130] ! I0127 12:12:05.480113       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 12:36:58.384955    9948 command_runner.go:130] ! I0127 12:12:05.579605       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 12:36:58.384955    9948 command_runner.go:130] ! I0127 12:12:05.579669       1 shared_informer.go:320] Caches are synced for service config
	I0127 12:36:58.384955    9948 command_runner.go:130] ! I0127 12:12:05.580463       1 shared_informer.go:320] Caches are synced for node config
	I0127 12:36:58.387934    9948 logs.go:123] Gathering logs for kube-controller-manager [e07a66f8f619] ...
	I0127 12:36:58.387934    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e07a66f8f619"
	I0127 12:36:58.422542    9948 command_runner.go:130] ! I0127 12:11:53.668834       1 serving.go:386] Generated self-signed cert in-memory
	I0127 12:36:58.422617    9948 command_runner.go:130] ! I0127 12:11:53.986868       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0127 12:36:58.422638    9948 command_runner.go:130] ! I0127 12:11:53.987309       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:58.422638    9948 command_runner.go:130] ! I0127 12:11:53.989401       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0127 12:36:58.422638    9948 command_runner.go:130] ! I0127 12:11:53.990012       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:58.422638    9948 command_runner.go:130] ! I0127 12:11:53.990187       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 12:36:58.422698    9948 command_runner.go:130] ! I0127 12:11:53.990322       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:58.422723    9948 command_runner.go:130] ! I0127 12:11:58.581695       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0127 12:36:58.422723    9948 command_runner.go:130] ! I0127 12:11:58.581741       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0127 12:36:58.422723    9948 command_runner.go:130] ! I0127 12:11:58.615284       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0127 12:36:58.422723    9948 command_runner.go:130] ! I0127 12:11:58.615497       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0127 12:36:58.422805    9948 command_runner.go:130] ! I0127 12:11:58.615545       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0127 12:36:58.422805    9948 command_runner.go:130] ! I0127 12:11:58.626456       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0127 12:36:58.422805    9948 command_runner.go:130] ! I0127 12:11:58.626896       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0127 12:36:58.422911    9948 command_runner.go:130] ! I0127 12:11:58.626952       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0127 12:36:58.422931    9948 command_runner.go:130] ! I0127 12:11:58.636784       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0127 12:36:58.422931    9948 command_runner.go:130] ! I0127 12:11:58.636866       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0127 12:36:58.422983    9948 command_runner.go:130] ! I0127 12:11:58.637077       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0127 12:36:58.423004    9948 command_runner.go:130] ! I0127 12:11:58.637108       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0127 12:36:58.423004    9948 command_runner.go:130] ! I0127 12:11:58.649619       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0127 12:36:58.423004    9948 command_runner.go:130] ! I0127 12:11:58.649750       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0127 12:36:58.423004    9948 command_runner.go:130] ! I0127 12:11:58.649765       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0127 12:36:58.423089    9948 command_runner.go:130] ! I0127 12:11:58.650223       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0127 12:36:58.423089    9948 command_runner.go:130] ! I0127 12:11:58.650457       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0127 12:36:58.423155    9948 command_runner.go:130] ! I0127 12:11:58.682646       1 shared_informer.go:320] Caches are synced for tokens
	I0127 12:36:58.423155    9948 command_runner.go:130] ! I0127 12:11:58.684061       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0127 12:36:58.423155    9948 command_runner.go:130] ! I0127 12:11:58.684098       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0127 12:36:58.423234    9948 command_runner.go:130] ! I0127 12:11:58.698781       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0127 12:36:58.423234    9948 command_runner.go:130] ! I0127 12:11:58.699001       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0127 12:36:58.423234    9948 command_runner.go:130] ! I0127 12:11:58.699050       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0127 12:36:58.423234    9948 command_runner.go:130] ! I0127 12:11:58.699060       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0127 12:36:58.423288    9948 command_runner.go:130] ! I0127 12:11:58.720187       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0127 12:36:58.423308    9948 command_runner.go:130] ! I0127 12:11:58.720450       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0127 12:36:58.423308    9948 command_runner.go:130] ! I0127 12:11:58.725202       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0127 12:36:58.423390    9948 command_runner.go:130] ! I0127 12:11:58.736652       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0127 12:36:58.423390    9948 command_runner.go:130] ! I0127 12:11:58.737667       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0127 12:36:58.423460    9948 command_runner.go:130] ! I0127 12:11:58.738017       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0127 12:36:58.423483    9948 command_runner.go:130] ! I0127 12:11:58.758863       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0127 12:36:58.423483    9948 command_runner.go:130] ! I0127 12:11:58.759137       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0127 12:36:58.423483    9948 command_runner.go:130] ! I0127 12:11:58.759589       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0127 12:36:58.423536    9948 command_runner.go:130] ! I0127 12:11:58.759751       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0127 12:36:58.423536    9948 command_runner.go:130] ! I0127 12:11:58.778737       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0127 12:36:58.423536    9948 command_runner.go:130] ! I0127 12:11:58.779301       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0127 12:36:58.423536    9948 command_runner.go:130] ! I0127 12:11:58.794263       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0127 12:36:58.423603    9948 command_runner.go:130] ! I0127 12:11:58.805098       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0127 12:36:58.423639    9948 command_runner.go:130] ! I0127 12:11:58.805155       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0127 12:36:58.423639    9948 command_runner.go:130] ! I0127 12:11:58.805917       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0127 12:36:58.423639    9948 command_runner.go:130] ! I0127 12:11:58.889766       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0127 12:36:58.423695    9948 command_runner.go:130] ! I0127 12:11:58.889864       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0127 12:36:58.423716    9948 command_runner.go:130] ! I0127 12:11:58.889880       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0127 12:36:58.423716    9948 command_runner.go:130] ! I0127 12:11:59.169736       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0127 12:36:58.423716    9948 command_runner.go:130] ! I0127 12:11:59.169792       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0127 12:36:58.423716    9948 command_runner.go:130] ! I0127 12:11:59.169804       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0127 12:36:58.423807    9948 command_runner.go:130] ! I0127 12:11:59.292507       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0127 12:36:58.423807    9948 command_runner.go:130] ! I0127 12:11:59.292665       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0127 12:36:58.423807    9948 command_runner.go:130] ! I0127 12:11:59.292680       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0127 12:36:58.423865    9948 command_runner.go:130] ! I0127 12:11:59.451231       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0127 12:36:58.423890    9948 command_runner.go:130] ! I0127 12:11:59.451328       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:11:59.451387       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:11:59.451649       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:11:59.594702       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:11:59.594829       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:11:59.595498       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:11:59.595889       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:11:59.744969       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:11:59.745617       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:11:59.745871       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:11:59.892444       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:11:59.892907       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:11:59.893093       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.136328       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.136634       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.136654       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.136681       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.425858       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.426027       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.426047       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.426160       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.426327       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.426356       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.685414       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.685471       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.685482       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.841490       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.841888       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.841953       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.888027       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.888135       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.888174       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.889767       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.889893       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.889957       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0127 12:36:58.424447    9948 command_runner.go:130] ! I0127 12:12:00.890020       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0127 12:36:58.424487    9948 command_runner.go:130] ! I0127 12:12:00.890047       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0127 12:36:58.424487    9948 command_runner.go:130] ! I0127 12:12:00.890072       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0127 12:36:58.424487    9948 command_runner.go:130] ! I0127 12:12:00.890079       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0127 12:36:58.424487    9948 command_runner.go:130] ! I0127 12:12:00.890101       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:58.424584    9948 command_runner.go:130] ! I0127 12:12:00.890256       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:58.424584    9948 command_runner.go:130] ! I0127 12:12:00.890391       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:58.424584    9948 command_runner.go:130] ! I0127 12:12:01.042988       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0127 12:36:58.424584    9948 command_runner.go:130] ! I0127 12:12:01.043513       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0127 12:36:58.424651    9948 command_runner.go:130] ! I0127 12:12:01.043602       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0127 12:36:58.424651    9948 command_runner.go:130] ! I0127 12:12:01.043761       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0127 12:36:58.424651    9948 command_runner.go:130] ! W0127 12:12:01.189051       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0127 12:36:58.424709    9948 command_runner.go:130] ! I0127 12:12:01.192613       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0127 12:36:58.424709    9948 command_runner.go:130] ! I0127 12:12:01.192663       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0127 12:36:58.424709    9948 command_runner.go:130] ! I0127 12:12:01.193062       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0127 12:36:58.424709    9948 command_runner.go:130] ! I0127 12:12:01.193147       1 shared_informer.go:313] Waiting for caches to sync for node
	I0127 12:36:58.424709    9948 command_runner.go:130] ! I0127 12:12:01.493812       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0127 12:36:58.424807    9948 command_runner.go:130] ! I0127 12:12:01.493885       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0127 12:36:58.424807    9948 command_runner.go:130] ! I0127 12:12:01.493919       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0127 12:36:58.424867    9948 command_runner.go:130] ! I0127 12:12:01.494208       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0127 12:36:58.424867    9948 command_runner.go:130] ! I0127 12:12:01.494371       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0127 12:36:58.424867    9948 command_runner.go:130] ! I0127 12:12:01.494391       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0127 12:36:58.424950    9948 command_runner.go:130] ! I0127 12:12:01.494413       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0127 12:36:58.424976    9948 command_runner.go:130] ! I0127 12:12:01.494456       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0127 12:36:58.425030    9948 command_runner.go:130] ! I0127 12:12:01.494473       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0127 12:36:58.425055    9948 command_runner.go:130] ! I0127 12:12:01.494487       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0127 12:36:58.425055    9948 command_runner.go:130] ! I0127 12:12:01.494531       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0127 12:36:58.425055    9948 command_runner.go:130] ! I0127 12:12:01.494547       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0127 12:36:58.425114    9948 command_runner.go:130] ! I0127 12:12:01.494617       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0127 12:36:58.425114    9948 command_runner.go:130] ! I0127 12:12:01.494687       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0127 12:36:58.425217    9948 command_runner.go:130] ! I0127 12:12:01.494717       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0127 12:36:58.425217    9948 command_runner.go:130] ! I0127 12:12:01.494749       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0127 12:36:58.425217    9948 command_runner.go:130] ! I0127 12:12:01.494763       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0127 12:36:58.425294    9948 command_runner.go:130] ! I0127 12:12:01.494781       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0127 12:36:58.425345    9948 command_runner.go:130] ! I0127 12:12:01.494815       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0127 12:36:58.425385    9948 command_runner.go:130] ! I0127 12:12:01.494890       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:01.495196       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:01.495268       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:01.495404       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:01.495519       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:01.640900       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:01.641423       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:01.641492       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:01.789671       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:01.790209       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:01.790224       1 shared_informer.go:313] Waiting for caches to sync for job
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:01.939873       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:01.940295       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:01.940375       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.099155       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.099654       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.099741       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.240427       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.240688       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.240725       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.390343       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.390438       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.390450       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.539643       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.539766       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.539778       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.691835       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.691969       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.739108       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.739143       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.739157       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.739400       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.739775       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.740069       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.890126       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.890235       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.890247       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0127 12:36:58.425947    9948 command_runner.go:130] ! I0127 12:12:03.040125       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0127 12:36:58.425947    9948 command_runner.go:130] ! I0127 12:12:03.040770       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0127 12:36:58.425947    9948 command_runner.go:130] ! I0127 12:12:03.040983       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0127 12:36:58.426021    9948 command_runner.go:130] ! I0127 12:12:03.063768       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0127 12:36:58.426021    9948 command_runner.go:130] ! I0127 12:12:03.092877       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0127 12:36:58.426077    9948 command_runner.go:130] ! I0127 12:12:03.093448       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0127 12:36:58.426077    9948 command_runner.go:130] ! I0127 12:12:03.110720       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000\" does not exist"
	I0127 12:36:58.426144    9948 command_runner.go:130] ! I0127 12:12:03.126986       1 shared_informer.go:320] Caches are synced for endpoint
	I0127 12:36:58.426144    9948 command_runner.go:130] ! I0127 12:12:03.127087       1 shared_informer.go:320] Caches are synced for taint
	I0127 12:36:58.426174    9948 command_runner.go:130] ! I0127 12:12:03.127203       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0127 12:36:58.426216    9948 command_runner.go:130] ! I0127 12:12:03.127313       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000"
	I0127 12:36:58.426216    9948 command_runner.go:130] ! I0127 12:12:03.127524       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0127 12:36:58.426216    9948 command_runner.go:130] ! I0127 12:12:03.137503       1 shared_informer.go:320] Caches are synced for service account
	I0127 12:36:58.426283    9948 command_runner.go:130] ! I0127 12:12:03.137554       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:36:58.426283    9948 command_runner.go:130] ! I0127 12:12:03.138208       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0127 12:36:58.426283    9948 command_runner.go:130] ! I0127 12:12:03.138217       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0127 12:36:58.426283    9948 command_runner.go:130] ! I0127 12:12:03.138352       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0127 12:36:58.426347    9948 command_runner.go:130] ! I0127 12:12:03.141127       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0127 12:36:58.426373    9948 command_runner.go:130] ! I0127 12:12:03.141405       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0127 12:36:58.426373    9948 command_runner.go:130] ! I0127 12:12:03.141415       1 shared_informer.go:320] Caches are synced for TTL
	I0127 12:36:58.426373    9948 command_runner.go:130] ! I0127 12:12:03.141424       1 shared_informer.go:320] Caches are synced for ephemeral
	I0127 12:36:58.426427    9948 command_runner.go:130] ! I0127 12:12:03.141607       1 shared_informer.go:320] Caches are synced for daemon sets
	I0127 12:36:58.426451    9948 command_runner.go:130] ! I0127 12:12:03.141617       1 shared_informer.go:320] Caches are synced for stateful set
	I0127 12:36:58.426451    9948 command_runner.go:130] ! I0127 12:12:03.142442       1 shared_informer.go:320] Caches are synced for cronjob
	I0127 12:36:58.426451    9948 command_runner.go:130] ! I0127 12:12:03.146511       1 shared_informer.go:320] Caches are synced for persistent volume
	I0127 12:36:58.426506    9948 command_runner.go:130] ! I0127 12:12:03.150765       1 shared_informer.go:320] Caches are synced for expand
	I0127 12:36:58.426506    9948 command_runner.go:130] ! I0127 12:12:03.152122       1 shared_informer.go:320] Caches are synced for PVC protection
	I0127 12:36:58.426530    9948 command_runner.go:130] ! I0127 12:12:03.160180       1 shared_informer.go:320] Caches are synced for GC
	I0127 12:36:58.426530    9948 command_runner.go:130] ! I0127 12:12:03.164570       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:36:58.426585    9948 command_runner.go:130] ! I0127 12:12:03.170520       1 shared_informer.go:320] Caches are synced for namespace
	I0127 12:36:58.426585    9948 command_runner.go:130] ! I0127 12:12:03.185040       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0127 12:36:58.426585    9948 command_runner.go:130] ! I0127 12:12:03.186131       1 shared_informer.go:320] Caches are synced for HPA
	I0127 12:36:58.426585    9948 command_runner.go:130] ! I0127 12:12:03.188683       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0127 12:36:58.426648    9948 command_runner.go:130] ! I0127 12:12:03.191196       1 shared_informer.go:320] Caches are synced for crt configmap
	I0127 12:36:58.426648    9948 command_runner.go:130] ! I0127 12:12:03.192089       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0127 12:36:58.426648    9948 command_runner.go:130] ! I0127 12:12:03.192497       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0127 12:36:58.426648    9948 command_runner.go:130] ! I0127 12:12:03.192682       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0127 12:36:58.426648    9948 command_runner.go:130] ! I0127 12:12:03.192862       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0127 12:36:58.426648    9948 command_runner.go:130] ! I0127 12:12:03.193013       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:36:58.426648    9948 command_runner.go:130] ! I0127 12:12:03.193030       1 shared_informer.go:320] Caches are synced for job
	I0127 12:36:58.426648    9948 command_runner.go:130] ! I0127 12:12:03.193151       1 shared_informer.go:320] Caches are synced for deployment
	I0127 12:36:58.426884    9948 command_runner.go:130] ! I0127 12:12:03.193982       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0127 12:36:58.426913    9948 command_runner.go:130] ! I0127 12:12:03.194157       1 shared_informer.go:320] Caches are synced for node
	I0127 12:36:58.426913    9948 command_runner.go:130] ! I0127 12:12:03.194244       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0127 12:36:58.426913    9948 command_runner.go:130] ! I0127 12:12:03.194281       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0127 12:36:58.426913    9948 command_runner.go:130] ! I0127 12:12:03.194310       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0127 12:36:58.426981    9948 command_runner.go:130] ! I0127 12:12:03.194318       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0127 12:36:58.427009    9948 command_runner.go:130] ! I0127 12:12:03.194846       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0127 12:36:58.427009    9948 command_runner.go:130] ! I0127 12:12:03.196614       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:36:58.427009    9948 command_runner.go:130] ! I0127 12:12:03.197111       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0127 12:36:58.427009    9948 command_runner.go:130] ! I0127 12:12:03.197095       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0127 12:36:58.427110    9948 command_runner.go:130] ! I0127 12:12:03.199168       1 shared_informer.go:320] Caches are synced for disruption
	I0127 12:36:58.427130    9948 command_runner.go:130] ! I0127 12:12:03.200153       1 shared_informer.go:320] Caches are synced for attach detach
	I0127 12:36:58.427130    9948 command_runner.go:130] ! I0127 12:12:03.207229       1 shared_informer.go:320] Caches are synced for PV protection
	I0127 12:36:58.427130    9948 command_runner.go:130] ! I0127 12:12:03.214016       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-659000" podCIDRs=["10.244.0.0/24"]
	I0127 12:36:58.427130    9948 command_runner.go:130] ! I0127 12:12:03.214057       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.427231    9948 command_runner.go:130] ! I0127 12:12:03.214083       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:03.216325       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:03.840748       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:04.356274       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="345.711056ms"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:04.454747       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="97.841105ms"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:04.534437       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="79.56576ms"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:04.576528       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="41.959673ms"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:04.576771       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="53.3µs"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:26.045035       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:26.074083       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:26.085407       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="109.3µs"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:26.129584       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="119.3µs"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:27.964629       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="49.302µs"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:28.020606       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="31.923176ms"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:28.020971       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="110.703µs"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:28.132341       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:29.790464       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:15:07.611410       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m02\" does not exist"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:15:07.630009       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-659000-m02" podCIDRs=["10.244.1.0/24"]
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:15:07.631297       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:15:07.631526       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:15:07.655401       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:15:07.883346       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:15:08.169505       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m02"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:15:08.255644       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:15:08.418223       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.427835    9948 command_runner.go:130] ! I0127 12:15:17.811768       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.427835    9948 command_runner.go:130] ! I0127 12:15:36.752543       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:58.427835    9948 command_runner.go:130] ! I0127 12:15:36.753915       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:15:36.769807       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:15:38.199464       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:15:38.449749       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:16:02.550786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="103.313802ms"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:16:02.585867       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="34.67067ms"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:16:02.586257       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="347.903µs"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:16:02.588870       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="48.6µs"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:16:05.434486       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="13.589639ms"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:16:05.435765       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="54.401µs"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:16:05.890170       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="9.003392ms"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:16:05.890477       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="36.901µs"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:16:09.305780       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:16:33.434322       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:19:26.820887       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:19:54.916460       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:19:54.917420       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m03\" does not exist"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:19:54.965530       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-659000-m03" podCIDRs=["10.244.2.0/24"]
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:19:54.966061       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:19:54.966297       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:19:55.802981       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:19:56.378698       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:19:58.252320       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m03"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:19:58.280410       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:20:05.560777       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:20:25.959831       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428470    9948 command_runner.go:130] ! I0127 12:20:28.750598       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428470    9948 command_runner.go:130] ! I0127 12:20:28.751325       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:58.428575    9948 command_runner.go:130] ! I0127 12:20:28.769163       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428575    9948 command_runner.go:130] ! I0127 12:20:33.279397       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428575    9948 command_runner.go:130] ! I0127 12:23:26.795899       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.428575    9948 command_runner.go:130] ! I0127 12:24:32.956118       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.428662    9948 command_runner.go:130] ! I0127 12:25:42.001288       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428691    9948 command_runner.go:130] ! I0127 12:28:32.628178       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:28:38.397672       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:28:38.399092       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:28:38.428451       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:28:43.510900       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:29:38.000555       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:30:52.866288       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:30:52.895359       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:30:58.140304       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:31:04.208510       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:31:04.209007       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m03\" does not exist"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:31:04.238560       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-659000-m03" podCIDRs=["10.244.3.0/24"]
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:31:04.238634       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! E0127 12:31:04.255963       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-659000-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-659000-m03" podCIDRs=["10.244.4.0/24"]
	I0127 12:36:58.428747    9948 command_runner.go:130] ! E0127 12:31:04.256068       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-659000-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-659000-m03"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! E0127 12:31:04.256109       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'multinode-659000-m03': failed to patch node CIDR: Node \"multinode-659000-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:31:04.256134       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:31:04.261242       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:31:04.513319       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:31:05.081710       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.429345    9948 command_runner.go:130] ! I0127 12:31:08.523576       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.429394    9948 command_runner.go:130] ! I0127 12:31:14.394811       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.429394    9948 command_runner.go:130] ! I0127 12:31:22.407069       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:58.429394    9948 command_runner.go:130] ! I0127 12:31:22.407472       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.429394    9948 command_runner.go:130] ! I0127 12:31:22.419743       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.429394    9948 command_runner.go:130] ! I0127 12:31:23.498434       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.429394    9948 command_runner.go:130] ! I0127 12:33:08.544063       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:58.429394    9948 command_runner.go:130] ! I0127 12:33:08.544656       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.429394    9948 command_runner.go:130] ! I0127 12:33:08.574301       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.429394    9948 command_runner.go:130] ! I0127 12:33:13.661256       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
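	The controller-manager entries above record a PodCIDR allocation conflict for multinode-659000-m03: the node-ipam-controller tried to patch a second CIDR (10.244.4.0/24) onto a node that already carried 10.244.3.0/24, which the API server rejects because a node may hold only one CIDR per IP family and podCIDR may not change once set. Purely as an illustrative diagnostic sketch (these commands are not part of the test run; the node name is taken from the log context above), the CIDRs recorded on the node could be inspected with kubectl:

	    # Hypothetical manual check, assuming kubectl is pointed at the multinode-659000 cluster.
	    kubectl get node multinode-659000-m03 -o jsonpath='{.spec.podCIDR}{"\n"}'
	    kubectl get node multinode-659000-m03 -o jsonpath='{.spec.podCIDRs}{"\n"}'

	If the stale CIDR persists after a node is re-added, deleting and re-registering the node object is the usual way to let the allocator assign a fresh range; the log shows the allocator itself releasing the rejected CIDR and requeuing.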
	I0127 12:36:58.451155    9948 logs.go:123] Gathering logs for Docker ...
	I0127 12:36:58.451155    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0127 12:36:58.479838    9948 command_runner.go:130] > Jan 27 12:34:11 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0127 12:36:58.479838    9948 command_runner.go:130] > Jan 27 12:34:11 minikube cri-dockerd[223]: time="2025-01-27T12:34:11Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0127 12:36:58.479838    9948 command_runner.go:130] > Jan 27 12:34:11 minikube cri-dockerd[223]: time="2025-01-27T12:34:11Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0127 12:36:58.480028    9948 command_runner.go:130] > Jan 27 12:34:11 minikube cri-dockerd[223]: time="2025-01-27T12:34:11Z" level=info msg="Start docker client with request timeout 0s"
	I0127 12:36:58.480028    9948 command_runner.go:130] > Jan 27 12:34:11 minikube cri-dockerd[223]: time="2025-01-27T12:34:11Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0127 12:36:58.480028    9948 command_runner.go:130] > Jan 27 12:34:11 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:58.480099    9948 command_runner.go:130] > Jan 27 12:34:11 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0127 12:36:58.480099    9948 command_runner.go:130] > Jan 27 12:34:11 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0127 12:36:58.480099    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0127 12:36:58.480099    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0127 12:36:58.480099    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0127 12:36:58.480099    9948 command_runner.go:130] > Jan 27 12:34:14 minikube cri-dockerd[404]: time="2025-01-27T12:34:14Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0127 12:36:58.480099    9948 command_runner.go:130] > Jan 27 12:34:14 minikube cri-dockerd[404]: time="2025-01-27T12:34:14Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0127 12:36:58.480099    9948 command_runner.go:130] > Jan 27 12:34:14 minikube cri-dockerd[404]: time="2025-01-27T12:34:14Z" level=info msg="Start docker client with request timeout 0s"
	I0127 12:36:58.480231    9948 command_runner.go:130] > Jan 27 12:34:14 minikube cri-dockerd[404]: time="2025-01-27T12:34:14Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0127 12:36:58.480231    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:58.480231    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0127 12:36:58.480231    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0127 12:36:58.480302    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0127 12:36:58.480330    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0127 12:36:58.480330    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0127 12:36:58.480330    9948 command_runner.go:130] > Jan 27 12:34:16 minikube cri-dockerd[425]: time="2025-01-27T12:34:16Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0127 12:36:58.480330    9948 command_runner.go:130] > Jan 27 12:34:16 minikube cri-dockerd[425]: time="2025-01-27T12:34:16Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0127 12:36:58.480429    9948 command_runner.go:130] > Jan 27 12:34:16 minikube cri-dockerd[425]: time="2025-01-27T12:34:16Z" level=info msg="Start docker client with request timeout 0s"
	I0127 12:36:58.480453    9948 command_runner.go:130] > Jan 27 12:34:16 minikube cri-dockerd[425]: time="2025-01-27T12:34:16Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0127 12:36:58.480478    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:58.480478    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 systemd[1]: Starting Docker Application Container Engine...
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[653]: time="2025-01-27T12:35:01.316616305Z" level=info msg="Starting up"
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[653]: time="2025-01-27T12:35:01.317424338Z" level=info msg="containerd not running, starting managed containerd"
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[653]: time="2025-01-27T12:35:01.318870498Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=659
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.350184287Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374094572Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374181575Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374315681Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374337282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.481149    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374861203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:58.481149    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374889804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.481240    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375040811Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:58.481240    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375239819Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.481308    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375267320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:58.481361    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375281220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.481361    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375833643Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.481361    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.376559373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.481361    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.379449292Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:58.481361    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.379538296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.483009    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.379661901Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:58.483009    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.379800807Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0127 12:36:58.483106    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.380313228Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0127 12:36:58.483106    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.380441533Z" level=info msg="metadata content store policy set" policy=shared
	I0127 12:36:58.483106    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.385960360Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0127 12:36:58.483219    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386099266Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0127 12:36:58.483246    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386121867Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0127 12:36:58.483246    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386137768Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0127 12:36:58.483307    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386151968Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0127 12:36:58.483348    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386229971Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386475981Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386600687Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386685890Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386757893Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386815695Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386833196Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386854497Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386882698Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386897399Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386908999Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386920500Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386931000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386948401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386962701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387079606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387099107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387131708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387149509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387164010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387179110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387194311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387212812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483913    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387227412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483913    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387242613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483957    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387257314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483957    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387275514Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0127 12:36:58.483957    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387300315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483957    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387352418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.484041    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387385019Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0127 12:36:58.484092    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387423920Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0127 12:36:58.484132    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387443921Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0127 12:36:58.484132    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387454422Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0127 12:36:58.484175    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387465222Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0127 12:36:58.484239    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387473923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387486423Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387496523Z" level=info msg="NRI interface is disabled by configuration."
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.388077647Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.388176351Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.388221553Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.388239554Z" level=info msg="containerd successfully booted in 0.040630s"
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:02 multinode-659000 dockerd[653]: time="2025-01-27T12:35:02.375461301Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:02 multinode-659000 dockerd[653]: time="2025-01-27T12:35:02.619440119Z" level=info msg="Loading containers: start."
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:02 multinode-659000 dockerd[653]: time="2025-01-27T12:35:02.931712674Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.079754338Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.199112944Z" level=info msg="Loading containers: done."
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.227370410Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.227394111Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.227415612Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.227924231Z" level=info msg="Daemon has completed initialization"
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.267619030Z" level=info msg="API listen on /var/run/docker.sock"
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.267851638Z" level=info msg="API listen on [::]:2376"
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 systemd[1]: Started Docker Application Container Engine.
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.208684124Z" level=info msg="Processing signal 'terminated'"
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.210887831Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.211188432Z" level=info msg="Daemon shutdown complete"
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.211249132Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.211349733Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0127 12:36:58.484837    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 systemd[1]: Stopping Docker Application Container Engine...
	I0127 12:36:58.484837    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 systemd[1]: docker.service: Deactivated successfully.
	I0127 12:36:58.484886    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 systemd[1]: Stopped Docker Application Container Engine.
	I0127 12:36:58.484886    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 systemd[1]: Starting Docker Application Container Engine...
	I0127 12:36:58.484886    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:29.270852796Z" level=info msg="Starting up"
	I0127 12:36:58.484940    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:29.271817099Z" level=info msg="containerd not running, starting managed containerd"
	I0127 12:36:58.484940    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:29.272921603Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1109
	I0127 12:36:58.484940    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.304741210Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0127 12:36:58.485024    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329258592Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0127 12:36:58.485024    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329353092Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0127 12:36:58.485082    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329390892Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0127 12:36:58.485105    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329406192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329428593Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329441293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329563193Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329667793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329687993Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329698693Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329723194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329854194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.332844104Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.332945004Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333117005Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333187905Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333222205Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333244905Z" level=info msg="metadata content store policy set" policy=shared
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333669407Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333741907Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333760007Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333804107Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333825507Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333876808Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334348509Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334487410Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334670410Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334694510Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334722510Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.485667    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334740210Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.485707    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334754110Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.485707    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334768211Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.485707    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334783611Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.485707    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334797111Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334827611Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334839711Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334900511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334918411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334939711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334956111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334972911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335000311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335303412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335328412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335345712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335365113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335379713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335394013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335408713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335432513Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335458213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335473813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335509613Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335706914Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335751914Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335766514Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335779214Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335790814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335808914Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335823714Z" level=info msg="NRI interface is disabled by configuration."
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.336050915Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.336227915Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.336312916Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.336356016Z" level=info msg="containerd successfully booted in 0.033394s"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.313483202Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.352802934Z" level=info msg="Loading containers: start."
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.586901421Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.690006868Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.804531453Z" level=info msg="Loading containers: done."
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.832567747Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.832684748Z" level=info msg="Daemon has completed initialization"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.868895669Z" level=info msg="API listen on /var/run/docker.sock"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 systemd[1]: Started Docker Application Container Engine.
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.869822273Z" level=info msg="API listen on [::]:2376"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Start docker client with request timeout 0s"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Loaded network plugin cni"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Start cri-dockerd grpc backend"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:36Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-58667487b6-2jq9j_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"4c82c0ec4aeaa9b21462a8248326ae982d6f7a0aee31347f1a58d216f0335177\""
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:36Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-668d6bf9bc-2qw6w_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"4a53e133a1cd6ab9514cb15ac3c4f1d5683d17008b482cebb08bf4809e060709\""
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.148610487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.149713190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.149731191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.149823291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.227312151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.227946754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.228465355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.229058857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b770a357d98307d140bf1525f91cca5fa9278f7f9428b9b956db31e6a36de7f2/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.326758786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.326897686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.327082287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.327397788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.340486032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.340542232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.340557232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.340640833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/910315897d84204b3db03c56eaeac0c855a23f6250a406220a840c10e2dad7a7/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5601285bb260a8ced44a77e9dbb10f08580841c917885470ec5941525f08ee76/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cdf534e99b2bbcc52d3bf2ce73ef5d4299b5264cf0a050fa21ff7f6fe2bb3b2a/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.671974447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.672075247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.672094947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.673787353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.761333147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.761791949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.761989149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.763491554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.875104030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.875307231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.879314144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.879751245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.905404632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.905473732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.905487532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.905580032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:41Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:42.944884578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:42.944962279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:42.944975379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:42.945417180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.028307259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.028541060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.028779960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.029212562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.033020375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.033338176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.033463276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.033775977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/34d579bb511fec290478f20b13002063b43c1a71bd6f2f45f1d83bbd8ac971ab/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b613e9a7a356580fd5381e358408317fd6120a119c23f3f196adda302e5ca97f/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d43e4cc62e0877d4b65191623d58195cd33c60eff33c6e49e605f69620d5115f/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.564400062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.564959364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.565260665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.565864167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.593549260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.594548363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.594809964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.595677067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.831064858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.831237859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.831252459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.831462360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:14.113708902Z" level=info msg="shim disconnected" id=9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f namespace=moby
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:14.113811702Z" level=warning msg="cleaning up after shim disconnected" id=9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f namespace=moby
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:14.113825002Z" level=info msg="cleaning up dead shim" namespace=moby
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 dockerd[1103]: time="2025-01-27T12:36:14.115914814Z" level=info msg="ignoring event" container=9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.602318882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.604079090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.604098490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.604656892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.795612113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.795786714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.796654617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.796995818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861006350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861082751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861094651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861334452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:36:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6b22dbb5ef3e0d283203499fffad001c9c20c643564a55e5bfa5d6352f80e178/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:36:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef504f99724cba01531b3894329439ae069a4ccac272e31bfac333cc24e62c53/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.321502068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.321825070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.321903471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.322491776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.384958874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.385201176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.385326577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.385735080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.519981    9948 logs.go:123] Gathering logs for etcd [0ef2a3b50bae] ...
	I0127 12:36:58.519981    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ef2a3b50bae"
	I0127 12:36:58.549005    9948 command_runner.go:130] ! {"level":"warn","ts":"2025-01-27T12:35:38.248296Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0127 12:36:58.549005    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.248523Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.29.198.106:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.29.198.106:2380","--initial-cluster=multinode-659000=https://172.29.198.106:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.29.198.106:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.29.198.106:2380","--name=multinode-659000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","-
-proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0127 12:36:58.549005    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.249804Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0127 12:36:58.549005    9948 command_runner.go:130] ! {"level":"warn","ts":"2025-01-27T12:35:38.249933Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0127 12:36:58.549005    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.249951Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://172.29.198.106:2380"]}
	I0127 12:36:58.549005    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.250358Z","caller":"embed/etcd.go:497","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0127 12:36:58.549005    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.255871Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.29.198.106:2379"]}
	I0127 12:36:58.549005    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.258341Z","caller":"embed/etcd.go:311","msg":"starting an etcd server","etcd-version":"3.5.16","git-sha":"f20bbad","go-version":"go1.22.7","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-659000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.29.198.106:2380"],"listen-peer-urls":["https://172.29.198.106:2380"],"advertise-client-urls":["https://172.29.198.106:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.198.106:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initi
al-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.282453Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"23.428079ms"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.322950Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.352706Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"d020e240c474bd89","local-member-id":"925e6945be3a5b5b","commit-index":2090}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.354002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b switched to configuration voters=()"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.354079Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became follower at term 2"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.354103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 925e6945be3a5b5b [peers: [], term: 2, commit: 2090, applied: 0, lastindex: 2090, lastterm: 2]"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"warn","ts":"2025-01-27T12:35:38.367343Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.371532Z","caller":"mvcc/kvstore.go:346","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1388}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.377112Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":1808}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.386775Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.395908Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"925e6945be3a5b5b","timeout":"7s"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.396497Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"925e6945be3a5b5b"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.396684Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"925e6945be3a5b5b","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.396970Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.399309Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.401105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b switched to configuration voters=(10546983125613435739)"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.400045Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.404834Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.404888Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.405566Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d020e240c474bd89","local-member-id":"925e6945be3a5b5b","added-peer-id":"925e6945be3a5b5b","added-peer-peer-urls":["https://172.29.204.17:2380"]}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.405716Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d020e240c474bd89","local-member-id":"925e6945be3a5b5b","cluster-version":"3.5"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.405754Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.407643Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.408091Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"925e6945be3a5b5b","initial-advertise-peer-urls":["https://172.29.198.106:2380"],"listen-peer-urls":["https://172.29.198.106:2380"],"advertise-client-urls":["https://172.29.198.106:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.198.106:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.408386Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.408686Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.29.198.106:2380"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.408809Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.29.198.106:2380"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.355207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b is starting a new election at term 2"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.355615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became pre-candidate at term 2"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.355770Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b received MsgPreVoteResp from 925e6945be3a5b5b at term 2"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.355926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became candidate at term 3"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.356088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b received MsgVoteResp from 925e6945be3a5b5b at term 3"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.356235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became leader at term 3"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.356449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 925e6945be3a5b5b elected leader 925e6945be3a5b5b at term 3"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.368540Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"925e6945be3a5b5b","local-member-attributes":"{Name:multinode-659000 ClientURLs:[https://172.29.198.106:2379]}","request-path":"/0/members/925e6945be3a5b5b/attributes","cluster-id":"d020e240c474bd89","publish-timeout":"7s"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.369045Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.371833Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0127 12:36:58.551004    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.372238Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0127 12:36:58.551004    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.374158Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0127 12:36:58.551004    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.383680Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0127 12:36:58.551004    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.391404Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0127 12:36:58.551004    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.392982Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.29.198.106:2379"}
	I0127 12:36:58.551004    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.399505Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0127 12:36:58.556995    9948 logs.go:123] Gathering logs for kube-scheduler [ed51c7eaa966] ...
	I0127 12:36:58.556995    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed51c7eaa966"
	I0127 12:36:58.583234    9948 command_runner.go:130] ! I0127 12:35:39.285954       1 serving.go:386] Generated self-signed cert in-memory
	I0127 12:36:58.583234    9948 command_runner.go:130] ! W0127 12:35:41.361191       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0127 12:36:58.583234    9948 command_runner.go:130] ! W0127 12:35:41.363231       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0127 12:36:58.583234    9948 command_runner.go:130] ! W0127 12:35:41.363467       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0127 12:36:58.583234    9948 command_runner.go:130] ! W0127 12:35:41.363598       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 12:36:58.583234    9948 command_runner.go:130] ! I0127 12:35:41.458309       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 12:36:58.583234    9948 command_runner.go:130] ! I0127 12:35:41.458594       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:58.583234    9948 command_runner.go:130] ! I0127 12:35:41.465036       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 12:36:58.583234    9948 command_runner.go:130] ! I0127 12:35:41.465587       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 12:36:58.583234    9948 command_runner.go:130] ! I0127 12:35:41.466480       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:36:58.583234    9948 command_runner.go:130] ! I0127 12:35:41.466554       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:58.583234    9948 command_runner.go:130] ! I0127 12:35:41.567642       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:36:58.587252    9948 logs.go:123] Gathering logs for kube-scheduler [a16e06a03860] ...
	I0127 12:36:58.587252    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a16e06a03860"
	I0127 12:36:58.618437    9948 command_runner.go:130] ! I0127 12:11:54.280431       1 serving.go:386] Generated self-signed cert in-memory
	I0127 12:36:58.618543    9948 command_runner.go:130] ! W0127 12:11:55.581187       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0127 12:36:58.618593    9948 command_runner.go:130] ! W0127 12:11:55.581309       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0127 12:36:58.618593    9948 command_runner.go:130] ! W0127 12:11:55.581382       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0127 12:36:58.618593    9948 command_runner.go:130] ! W0127 12:11:55.581390       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 12:36:58.618593    9948 command_runner.go:130] ! I0127 12:11:55.694969       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 12:36:58.618593    9948 command_runner.go:130] ! I0127 12:11:55.695193       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:58.618593    9948 command_runner.go:130] ! I0127 12:11:55.700077       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 12:36:58.618593    9948 command_runner.go:130] ! I0127 12:11:55.700446       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 12:36:58.618593    9948 command_runner.go:130] ! I0127 12:11:55.700992       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:36:58.618593    9948 command_runner.go:130] ! I0127 12:11:55.701410       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:58.618593    9948 command_runner.go:130] ! W0127 12:11:55.715521       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:58.618593    9948 command_runner.go:130] ! E0127 12:11:55.717196       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.618593    9948 command_runner.go:130] ! W0127 12:11:55.717649       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0127 12:36:58.618593    9948 command_runner.go:130] ! E0127 12:11:55.717921       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.618593    9948 command_runner.go:130] ! W0127 12:11:55.718583       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0127 12:36:58.618593    9948 command_runner.go:130] ! E0127 12:11:55.718820       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.618593    9948 command_runner.go:130] ! W0127 12:11:55.728298       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0127 12:36:58.618593    9948 command_runner.go:130] ! E0127 12:11:55.728648       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0127 12:36:58.618593    9948 command_runner.go:130] ! W0127 12:11:55.729000       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0127 12:36:58.618593    9948 command_runner.go:130] ! E0127 12:11:55.729243       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.618593    9948 command_runner.go:130] ! W0127 12:11:55.729633       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0127 12:36:58.619119    9948 command_runner.go:130] ! E0127 12:11:55.730380       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.619162    9948 command_runner.go:130] ! W0127 12:11:55.729677       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0127 12:36:58.619197    9948 command_runner.go:130] ! E0127 12:11:55.730837       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.619197    9948 command_runner.go:130] ! W0127 12:11:55.729713       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0127 12:36:58.619197    9948 command_runner.go:130] ! W0127 12:11:55.729749       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:58.619197    9948 command_runner.go:130] ! E0127 12:11:55.731479       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.619197    9948 command_runner.go:130] ! W0127 12:11:55.729782       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:58.619979    9948 command_runner.go:130] ! E0127 12:11:55.732242       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.619979    9948 command_runner.go:130] ! W0127 12:11:55.729811       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:58.620042    9948 command_runner.go:130] ! E0127 12:11:55.734240       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.620137    9948 command_runner.go:130] ! E0127 12:11:55.734704       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.620167    9948 command_runner.go:130] ! W0127 12:11:55.738077       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0127 12:36:58.620167    9948 command_runner.go:130] ! E0127 12:11:55.738873       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.620167    9948 command_runner.go:130] ! W0127 12:11:55.739202       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0127 12:36:58.620167    9948 command_runner.go:130] ! E0127 12:11:55.739366       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.620167    9948 command_runner.go:130] ! W0127 12:11:55.739719       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0127 12:36:58.620167    9948 command_runner.go:130] ! E0127 12:11:55.739865       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.620167    9948 command_runner.go:130] ! W0127 12:11:55.740221       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0127 12:36:58.620167    9948 command_runner.go:130] ! E0127 12:11:55.740378       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.620167    9948 command_runner.go:130] ! W0127 12:11:55.740608       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:58.620167    9948 command_runner.go:130] ! E0127 12:11:55.740761       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.620167    9948 command_runner.go:130] ! W0127 12:11:56.556598       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0127 12:36:58.620167    9948 command_runner.go:130] ! E0127 12:11:56.557622       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.620167    9948 command_runner.go:130] ! W0127 12:11:56.595830       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:58.620167    9948 command_runner.go:130] ! E0127 12:11:56.596047       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.620694    9948 command_runner.go:130] ! W0127 12:11:56.691826       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0127 12:36:58.620694    9948 command_runner.go:130] ! E0127 12:11:56.691909       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0127 12:36:58.620825    9948 command_runner.go:130] ! W0127 12:11:56.806048       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:58.620914    9948 command_runner.go:130] ! E0127 12:11:56.806109       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.620938    9948 command_runner.go:130] ! W0127 12:11:56.846817       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0127 12:36:58.620989    9948 command_runner.go:130] ! E0127 12:11:56.847194       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.621027    9948 command_runner.go:130] ! W0127 12:11:56.871314       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0127 12:36:58.621027    9948 command_runner.go:130] ! E0127 12:11:56.872178       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.621027    9948 command_runner.go:130] ! W0127 12:11:56.887386       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0127 12:36:58.621027    9948 command_runner.go:130] ! E0127 12:11:56.887549       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.621027    9948 command_runner.go:130] ! W0127 12:11:56.918642       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0127 12:36:58.621027    9948 command_runner.go:130] ! E0127 12:11:56.919135       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.621027    9948 command_runner.go:130] ! W0127 12:11:57.039216       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:58.621027    9948 command_runner.go:130] ! E0127 12:11:57.039707       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.621027    9948 command_runner.go:130] ! W0127 12:11:57.055169       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0127 12:36:58.621027    9948 command_runner.go:130] ! E0127 12:11:57.055233       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.621027    9948 command_runner.go:130] ! W0127 12:11:57.106656       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0127 12:36:58.621027    9948 command_runner.go:130] ! E0127 12:11:57.106828       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.621027    9948 command_runner.go:130] ! W0127 12:11:57.214186       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:58.621027    9948 command_runner.go:130] ! E0127 12:11:57.214290       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.621027    9948 command_runner.go:130] ! W0127 12:11:57.298150       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0127 12:36:58.621027    9948 command_runner.go:130] ! E0127 12:11:57.298337       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.621551    9948 command_runner.go:130] ! W0127 12:11:57.310098       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0127 12:36:58.621625    9948 command_runner.go:130] ! E0127 12:11:57.310312       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.621625    9948 command_runner.go:130] ! W0127 12:11:57.312117       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:58.621736    9948 command_runner.go:130] ! E0127 12:11:57.312192       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.621736    9948 command_runner.go:130] ! W0127 12:11:57.321525       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0127 12:36:58.621804    9948 command_runner.go:130] ! E0127 12:11:57.321832       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.621804    9948 command_runner.go:130] ! I0127 12:11:59.701790       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:36:58.621804    9948 command_runner.go:130] ! I0127 12:33:15.443053       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0127 12:36:58.621878    9948 command_runner.go:130] ! I0127 12:33:15.443143       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0127 12:36:58.621905    9948 command_runner.go:130] ! I0127 12:33:15.452458       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 12:36:58.621933    9948 command_runner.go:130] ! E0127 12:33:15.487412       1 run.go:72] "command failed" err="finished without leader elect"
	I0127 12:36:58.632177    9948 logs.go:123] Gathering logs for kindnet [373bec67270f] ...
	I0127 12:36:58.632177    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 373bec67270f"
	I0127 12:36:58.657220    9948 command_runner.go:130] ! I0127 12:35:44.464092       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0127 12:36:58.657220    9948 command_runner.go:130] ! I0127 12:35:44.489651       1 main.go:139] hostIP = 172.29.198.106
	I0127 12:36:58.657220    9948 command_runner.go:130] ! podIP = 172.29.198.106
	I0127 12:36:58.657514    9948 command_runner.go:130] ! I0127 12:35:44.489794       1 main.go:148] setting mtu 1500 for CNI 
	I0127 12:36:58.657514    9948 command_runner.go:130] ! I0127 12:35:44.489865       1 main.go:178] kindnetd IP family: "ipv4"
	I0127 12:36:58.657514    9948 command_runner.go:130] ! I0127 12:35:44.490024       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0127 12:36:58.657514    9948 command_runner.go:130] ! I0127 12:35:45.397363       1 main.go:239] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-40: Error: Could not process rule: Operation not supported
	I0127 12:36:58.657514    9948 command_runner.go:130] ! add table inet kindnet-network-policies
	I0127 12:36:58.657514    9948 command_runner.go:130] ! ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:58.657601    9948 command_runner.go:130] ! , skipping network policies
	I0127 12:36:58.657630    9948 command_runner.go:130] ! W0127 12:36:15.407551       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0127 12:36:58.657630    9948 command_runner.go:130] ! E0127 12:36:15.407870       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError"
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:25.405793       1 main.go:297] Handling node with IPs: map[172.29.198.106:{}]
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:25.405967       1 main.go:301] handling current node
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:25.406822       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:25.406903       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:25.408014       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.29.199.129 Flags: [] Table: 0 Realm: 0} 
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:25.408956       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:25.409055       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:25.409321       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.29.206.88 Flags: [] Table: 0 Realm: 0} 
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:35.400986       1 main.go:297] Handling node with IPs: map[172.29.198.106:{}]
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:35.401115       1 main.go:301] handling current node
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:35.401203       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:35.401377       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:35.401789       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:35.401927       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:45.400837       1 main.go:297] Handling node with IPs: map[172.29.198.106:{}]
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:45.401002       1 main.go:301] handling current node
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:45.401061       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:45.401072       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:45.401385       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:45.401462       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:55.406998       1 main.go:297] Handling node with IPs: map[172.29.198.106:{}]
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:55.407153       1 main.go:301] handling current node
	I0127 12:36:58.658201    9948 command_runner.go:130] ! I0127 12:36:55.407182       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.658201    9948 command_runner.go:130] ! I0127 12:36:55.407192       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.658201    9948 command_runner.go:130] ! I0127 12:36:55.407535       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.658201    9948 command_runner.go:130] ! I0127 12:36:55.407746       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.661181    9948 logs.go:123] Gathering logs for container status ...
	I0127 12:36:58.661181    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:36:58.721183    9948 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0127 12:36:58.721183    9948 command_runner.go:130] > 528243cca8bfb       8c811b4aec35f                                                                                         11 seconds ago       Running             busybox                   1                   ef504f99724cb       busybox-58667487b6-2jq9j
	I0127 12:36:58.721183    9948 command_runner.go:130] > b3a9ed6e130c0       c69fa2e9cbf5f                                                                                         11 seconds ago       Running             coredns                   1                   6b22dbb5ef3e0       coredns-668d6bf9bc-2qw6w
	I0127 12:36:58.721183    9948 command_runner.go:130] > 389606c183b19       6e38f40d628db                                                                                         31 seconds ago       Running             storage-provisioner       2                   b613e9a7a3565       storage-provisioner
	I0127 12:36:58.721183    9948 command_runner.go:130] > 373bec67270fb       50415e5d05f05                                                                                         About a minute ago   Running             kindnet-cni               1                   d43e4cc62e087       kindnet-z2hqq
	I0127 12:36:58.721183    9948 command_runner.go:130] > 9b2db1d0cb61c       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   b613e9a7a3565       storage-provisioner
	I0127 12:36:58.721183    9948 command_runner.go:130] > 0283b35dee3cc       e29f9c7391fd9                                                                                         About a minute ago   Running             kube-proxy                1                   34d579bb511fe       kube-proxy-s46mv
	I0127 12:36:58.721183    9948 command_runner.go:130] > ea993630a3109       95c0bda56fc4d                                                                                         About a minute ago   Running             kube-apiserver            0                   5601285bb260a       kube-apiserver-multinode-659000
	I0127 12:36:58.721183    9948 command_runner.go:130] > 0ef2a3b50bae8       a9e7e6b294baf                                                                                         About a minute ago   Running             etcd                      0                   cdf534e99b2bb       etcd-multinode-659000
	I0127 12:36:58.721183    9948 command_runner.go:130] > ed51c7eaa9666       2b0d6572d062c                                                                                         About a minute ago   Running             kube-scheduler            1                   910315897d842       kube-scheduler-multinode-659000
	I0127 12:36:58.721183    9948 command_runner.go:130] > 8d4872cda28de       019ee182b58e2                                                                                         About a minute ago   Running             kube-controller-manager   1                   b770a357d9830       kube-controller-manager-multinode-659000
	I0127 12:36:58.721183    9948 command_runner.go:130] > 998a64b2baa2d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   4c82c0ec4aeaa       busybox-58667487b6-2jq9j
	I0127 12:36:58.721183    9948 command_runner.go:130] > f818dd15d8b02       c69fa2e9cbf5f                                                                                         24 minutes ago       Exited              coredns                   0                   4a53e133a1cd6       coredns-668d6bf9bc-2qw6w
	I0127 12:36:58.721183    9948 command_runner.go:130] > d758000dda95d       kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108              24 minutes ago       Exited              kindnet-cni               0                   f2d0bd65fe50d       kindnet-z2hqq
	I0127 12:36:58.721183    9948 command_runner.go:130] > bbec7ccef7da5       e29f9c7391fd9                                                                                         24 minutes ago       Exited              kube-proxy                0                   319cddeebceb6       kube-proxy-s46mv
	I0127 12:36:58.721183    9948 command_runner.go:130] > a16e06a038601       2b0d6572d062c                                                                                         25 minutes ago       Exited              kube-scheduler            0                   5423fc5113290       kube-scheduler-multinode-659000
	I0127 12:36:58.721183    9948 command_runner.go:130] > e07a66f8f6196       019ee182b58e2                                                                                         25 minutes ago       Exited              kube-controller-manager   0                   1bd5bf99bede3       kube-controller-manager-multinode-659000
	I0127 12:37:01.224548    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods
	I0127 12:37:01.224548    9948 round_trippers.go:469] Request Headers:
	I0127 12:37:01.224548    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:37:01.224548    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:37:01.235963    9948 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0127 12:37:01.235963    9948 round_trippers.go:577] Response Headers:
	I0127 12:37:01.235963    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:37:01.235963    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:37:01 GMT
	I0127 12:37:01.235963    9948 round_trippers.go:580]     Audit-Id: ea5a0f6d-fc63-43ff-bbfd-7fc2ef1e13dd
	I0127 12:37:01.235963    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:37:01.235963    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:37:01.235963    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:37:01.238848    9948 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2041"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"2024","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90382 chars]
	I0127 12:37:01.243235    9948 system_pods.go:59] 12 kube-system pods found
	I0127 12:37:01.243269    9948 system_pods.go:61] "coredns-668d6bf9bc-2qw6w" [8f0367fc-d842-4cc3-8e71-30869a548243] Running
	I0127 12:37:01.243269    9948 system_pods.go:61] "etcd-multinode-659000" [4c33fa42-51a7-4a7a-a497-cce80b8773d6] Running
	I0127 12:37:01.243269    9948 system_pods.go:61] "kindnet-kpfjt" [b00e6ead-b072-40b5-9c87-7697316d8107] Running
	I0127 12:37:01.243269    9948 system_pods.go:61] "kindnet-n7vjl" [23617db6-b970-4ead-845b-69776d50ffef] Running
	I0127 12:37:01.243269    9948 system_pods.go:61] "kindnet-z2hqq" [9b617a9c-e2b8-45fd-bee2-45cb03d4cd42] Running
	I0127 12:37:01.243269    9948 system_pods.go:61] "kube-apiserver-multinode-659000" [8fbee94f-fd8f-4431-bd9f-b75d49cb19d4] Running
	I0127 12:37:01.243269    9948 system_pods.go:61] "kube-controller-manager-multinode-659000" [8be02f36-161c-44f3-b526-56db3b8a007a] Running
	I0127 12:37:01.243269    9948 system_pods.go:61] "kube-proxy-pjhc8" [ddb6698c-b83d-4a49-9672-c894e87cbb66] Running
	I0127 12:37:01.243269    9948 system_pods.go:61] "kube-proxy-s46mv" [ae3b8daf-d674-4cfe-8652-cb5ff6ba8615] Running
	I0127 12:37:01.243269    9948 system_pods.go:61] "kube-proxy-sk5js" [ba679e1d-713c-4bd4-b267-2b887c1ac4df] Running
	I0127 12:37:01.243444    9948 system_pods.go:61] "kube-scheduler-multinode-659000" [52b91964-a331-4925-9e07-c8df32b4176d] Running
	I0127 12:37:01.243444    9948 system_pods.go:61] "storage-provisioner" [bcfd7913-1bc0-4c24-882f-2be92ec9b046] Running
	I0127 12:37:01.243475    9948 system_pods.go:74] duration metric: took 3.7146405s to wait for pod list to return data ...
	I0127 12:37:01.243475    9948 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:37:01.243680    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/default/serviceaccounts
	I0127 12:37:01.243722    9948 round_trippers.go:469] Request Headers:
	I0127 12:37:01.243722    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:37:01.243722    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:37:01.247432    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:37:01.247432    9948 round_trippers.go:577] Response Headers:
	I0127 12:37:01.247432    9948 round_trippers.go:580]     Audit-Id: 747ffff5-82fc-4ca7-b092-f9df2bbbeae0
	I0127 12:37:01.248162    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:37:01.248162    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:37:01.248162    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:37:01.248162    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:37:01.248162    9948 round_trippers.go:580]     Content-Length: 262
	I0127 12:37:01.248162    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:37:01 GMT
	I0127 12:37:01.248210    9948 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"2041"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"bff364bd-d78f-41e4-90bc-c2009fb4813f","resourceVersion":"328","creationTimestamp":"2025-01-27T12:12:03Z"}}]}
	I0127 12:37:01.248428    9948 default_sa.go:45] found service account: "default"
	I0127 12:37:01.248428    9948 default_sa.go:55] duration metric: took 4.9532ms for default service account to be created ...
	I0127 12:37:01.248428    9948 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:37:01.248428    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods
	I0127 12:37:01.248428    9948 round_trippers.go:469] Request Headers:
	I0127 12:37:01.248428    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:37:01.248428    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:37:01.256117    9948 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 12:37:01.256117    9948 round_trippers.go:577] Response Headers:
	I0127 12:37:01.256186    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:37:01.256186    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:37:01.256186    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:37:01.256221    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:37:01 GMT
	I0127 12:37:01.256221    9948 round_trippers.go:580]     Audit-Id: 71aafcdd-9018-43fc-bcd3-215a8cc752ff
	I0127 12:37:01.256221    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:37:01.257749    9948 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2041"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"2024","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90382 chars]
	I0127 12:37:01.262220    9948 system_pods.go:87] 12 kube-system pods found
	I0127 12:37:01.262411    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-dns
	I0127 12:37:01.262442    9948 round_trippers.go:469] Request Headers:
	I0127 12:37:01.262442    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:37:01.262442    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:37:01.265501    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:37:01.265588    9948 round_trippers.go:577] Response Headers:
	I0127 12:37:01.265588    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:37:01 GMT
	I0127 12:37:01.265622    9948 round_trippers.go:580]     Audit-Id: 5d6e4d9b-4f40-48c1-8fbb-5d22b550192a
	I0127 12:37:01.265622    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:37:01.265622    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:37:01.265622    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:37:01.265622    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:37:01.265933    9948 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2041"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"2024","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 7100 chars]
	I0127 12:37:01.266670    9948 system_pods.go:105] "coredns-668d6bf9bc-2qw6w" [8f0367fc-d842-4cc3-8e71-30869a548243] Running
	I0127 12:37:01.266670    9948 system_pods.go:105] "etcd-multinode-659000" [4c33fa42-51a7-4a7a-a497-cce80b8773d6] Running
	I0127 12:37:01.266716    9948 system_pods.go:105] "kindnet-kpfjt" [b00e6ead-b072-40b5-9c87-7697316d8107] Running
	I0127 12:37:01.266716    9948 system_pods.go:105] "kindnet-n7vjl" [23617db6-b970-4ead-845b-69776d50ffef] Running
	I0127 12:37:01.266716    9948 system_pods.go:105] "kindnet-z2hqq" [9b617a9c-e2b8-45fd-bee2-45cb03d4cd42] Running
	I0127 12:37:01.266744    9948 system_pods.go:105] "kube-apiserver-multinode-659000" [8fbee94f-fd8f-4431-bd9f-b75d49cb19d4] Running
	I0127 12:37:01.266744    9948 system_pods.go:105] "kube-controller-manager-multinode-659000" [8be02f36-161c-44f3-b526-56db3b8a007a] Running
	I0127 12:37:01.266744    9948 system_pods.go:105] "kube-proxy-pjhc8" [ddb6698c-b83d-4a49-9672-c894e87cbb66] Running
	I0127 12:37:01.266744    9948 system_pods.go:105] "kube-proxy-s46mv" [ae3b8daf-d674-4cfe-8652-cb5ff6ba8615] Running
	I0127 12:37:01.266744    9948 system_pods.go:105] "kube-proxy-sk5js" [ba679e1d-713c-4bd4-b267-2b887c1ac4df] Running
	I0127 12:37:01.266790    9948 system_pods.go:105] "kube-scheduler-multinode-659000" [52b91964-a331-4925-9e07-c8df32b4176d] Running
	I0127 12:37:01.266820    9948 system_pods.go:105] "storage-provisioner" [bcfd7913-1bc0-4c24-882f-2be92ec9b046] Running
	I0127 12:37:01.266820    9948 system_pods.go:147] duration metric: took 18.3916ms to wait for k8s-apps to be running ...
	I0127 12:37:01.266820    9948 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 12:37:01.280154    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:37:01.306558    9948 system_svc.go:56] duration metric: took 39.7375ms WaitForService to wait for kubelet
	I0127 12:37:01.306558    9948 kubeadm.go:582] duration metric: took 1m14.3271236s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:37:01.306558    9948 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:37:01.306558    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes
	I0127 12:37:01.306558    9948 round_trippers.go:469] Request Headers:
	I0127 12:37:01.306558    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:37:01.306558    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:37:01.312166    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:37:01.312166    9948 round_trippers.go:577] Response Headers:
	I0127 12:37:01.312166    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:37:01.312166    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:37:01.312166    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:37:01.312166    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:37:01 GMT
	I0127 12:37:01.312166    9948 round_trippers.go:580]     Audit-Id: 6d477784-cfda-4482-9fcd-64a22c4afb4e
	I0127 12:37:01.312296    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:37:01.312578    9948 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2041"},"items":[{"metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16260 chars]
	I0127 12:37:01.313729    9948 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:37:01.313756    9948 node_conditions.go:123] node cpu capacity is 2
	I0127 12:37:01.313756    9948 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:37:01.313823    9948 node_conditions.go:123] node cpu capacity is 2
	I0127 12:37:01.313823    9948 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:37:01.313823    9948 node_conditions.go:123] node cpu capacity is 2
	I0127 12:37:01.313852    9948 node_conditions.go:105] duration metric: took 7.2648ms to run NodePressure ...
	I0127 12:37:01.313852    9948 start.go:241] waiting for startup goroutines ...
	I0127 12:37:01.313852    9948 start.go:246] waiting for cluster config update ...
	I0127 12:37:01.313852    9948 start.go:255] writing updated cluster config ...
	I0127 12:37:01.317889    9948 out.go:201] 
	I0127 12:37:01.321567    9948 config.go:182] Loaded profile config "ha-011400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:37:01.338316    9948 config.go:182] Loaded profile config "multinode-659000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:37:01.338535    9948 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\config.json ...
	I0127 12:37:01.344930    9948 out.go:177] * Starting "multinode-659000-m02" worker node in "multinode-659000" cluster
	I0127 12:37:01.347220    9948 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0127 12:37:01.347389    9948 cache.go:56] Caching tarball of preloaded images
	I0127 12:37:01.347389    9948 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 12:37:01.347389    9948 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0127 12:37:01.347389    9948 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\config.json ...
	I0127 12:37:01.350997    9948 start.go:360] acquireMachinesLock for multinode-659000-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:37:01.351226    9948 start.go:364] duration metric: took 179.4µs to acquireMachinesLock for "multinode-659000-m02"
	I0127 12:37:01.351449    9948 start.go:96] Skipping create...Using existing machine configuration
	I0127 12:37:01.351449    9948 fix.go:54] fixHost starting: m02
	I0127 12:37:01.351580    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:37:03.479216    9948 main.go:141] libmachine: [stdout =====>] : Off
	
	I0127 12:37:03.479216    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:03.479334    9948 fix.go:112] recreateIfNeeded on multinode-659000-m02: state=Stopped err=<nil>
	W0127 12:37:03.479334    9948 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 12:37:03.484947    9948 out.go:177] * Restarting existing hyperv VM for "multinode-659000-m02" ...
	I0127 12:37:03.488036    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-659000-m02
	I0127 12:37:06.597718    9948 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:37:06.598392    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:06.598392    9948 main.go:141] libmachine: Waiting for host to start...
	I0127 12:37:06.598473    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:37:08.996533    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:37:08.996533    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:08.996845    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:37:11.538823    9948 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:37:11.538823    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:12.539348    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:37:14.775891    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:37:14.775891    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:14.775891    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:37:17.301803    9948 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:37:17.302472    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:18.302890    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:37:20.462482    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:37:20.463226    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:20.463292    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:37:22.951261    9948 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:37:22.951261    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:23.951341    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:37:26.166650    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:37:26.167547    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:26.167547    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:37:28.727343    9948 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:37:28.727382    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:29.728069    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:37:31.989341    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:37:31.990155    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:31.990229    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:37:34.561762    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:37:34.561762    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:34.565772    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:37:36.707428    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:37:36.707428    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:36.707428    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:37:39.369556    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:37:39.369556    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:39.369556    9948 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\config.json ...
	I0127 12:37:39.374611    9948 machine.go:93] provisionDockerMachine start ...
	I0127 12:37:39.374611    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:37:41.713498    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:37:41.713498    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:41.713859    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:37:44.375957    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:37:44.375957    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:44.381823    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:37:44.381961    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.205.217 22 <nil> <nil>}
	I0127 12:37:44.381961    9948 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 12:37:44.519223    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 12:37:44.519223    9948 buildroot.go:166] provisioning hostname "multinode-659000-m02"
	I0127 12:37:44.519378    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:37:46.737401    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:37:46.737735    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:46.737735    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:37:49.413255    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:37:49.413255    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:49.419714    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:37:49.420455    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.205.217 22 <nil> <nil>}
	I0127 12:37:49.420455    9948 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-659000-m02 && echo "multinode-659000-m02" | sudo tee /etc/hostname
	I0127 12:37:49.586768    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-659000-m02
	
	I0127 12:37:49.586768    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:37:51.773746    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:37:51.773746    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:51.774730    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:37:54.292118    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:37:54.292118    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:54.301229    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:37:54.301229    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.205.217 22 <nil> <nil>}
	I0127 12:37:54.301229    9948 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-659000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-659000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-659000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:37:54.457982    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:37:54.458065    9948 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0127 12:37:54.458065    9948 buildroot.go:174] setting up certificates
	I0127 12:37:54.458065    9948 provision.go:84] configureAuth start
	I0127 12:37:54.458198    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:37:56.616418    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:37:56.616573    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:56.616731    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:37:59.164609    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:37:59.164609    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:59.164609    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:38:01.397284    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:38:01.397528    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:01.397528    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:38:03.969402    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:38:03.969402    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:03.969402    9948 provision.go:143] copyHostCerts
	I0127 12:38:03.970066    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0127 12:38:03.970066    9948 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0127 12:38:03.970066    9948 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0127 12:38:03.970851    9948 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0127 12:38:03.971760    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0127 12:38:03.972442    9948 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0127 12:38:03.972442    9948 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0127 12:38:03.972442    9948 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0127 12:38:03.973604    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0127 12:38:03.974202    9948 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0127 12:38:03.974202    9948 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0127 12:38:03.974299    9948 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0127 12:38:03.975577    9948 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-659000-m02 san=[127.0.0.1 172.29.205.217 localhost minikube multinode-659000-m02]
	I0127 12:38:04.272193    9948 provision.go:177] copyRemoteCerts
	I0127 12:38:04.284343    9948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:38:04.284480    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:38:06.398935    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:38:06.398935    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:06.398935    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:38:08.938160    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:38:08.938160    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:08.938160    9948 sshutil.go:53] new ssh client: &{IP:172.29.205.217 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000-m02\id_rsa Username:docker}
	I0127 12:38:09.047327    9948 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7629345s)
	I0127 12:38:09.047471    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0127 12:38:09.047635    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 12:38:09.091832    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0127 12:38:09.092400    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0127 12:38:09.135988    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0127 12:38:09.136556    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 12:38:09.182090    9948 provision.go:87] duration metric: took 14.7238709s to configureAuth
	I0127 12:38:09.182090    9948 buildroot.go:189] setting minikube options for container-runtime
	I0127 12:38:09.182980    9948 config.go:182] Loaded profile config "multinode-659000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:38:09.183073    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:38:11.290925    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:38:11.291092    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:11.291227    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:38:13.823814    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:38:13.824910    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:13.830209    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:38:13.830793    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.205.217 22 <nil> <nil>}
	I0127 12:38:13.830793    9948 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0127 12:38:13.961510    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0127 12:38:13.961510    9948 buildroot.go:70] root file system type: tmpfs
	I0127 12:38:13.961754    9948 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0127 12:38:13.961754    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:38:16.107691    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:38:16.108080    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:16.108080    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:38:18.643315    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:38:18.643315    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:18.650637    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:38:18.650637    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.205.217 22 <nil> <nil>}
	I0127 12:38:18.651300    9948 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.29.198.106"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0127 12:38:18.797236    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.29.198.106
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0127 12:38:18.797785    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:38:20.929373    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:38:20.929373    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:20.930095    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:38:23.476272    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:38:23.476272    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:23.481591    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:38:23.481701    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.205.217 22 <nil> <nil>}
	I0127 12:38:23.481701    9948 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0127 12:38:25.865024    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0127 12:38:25.865063    9948 machine.go:96] duration metric: took 46.4899639s to provisionDockerMachine
	I0127 12:38:25.865063    9948 start.go:293] postStartSetup for "multinode-659000-m02" (driver="hyperv")
	I0127 12:38:25.865063    9948 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:38:25.877709    9948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:38:25.877709    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:38:27.997441    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:38:27.997441    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:27.997944    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:38:30.548758    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:38:30.548986    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:30.549171    9948 sshutil.go:53] new ssh client: &{IP:172.29.205.217 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000-m02\id_rsa Username:docker}
	I0127 12:38:30.648489    9948 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7707305s)
	I0127 12:38:30.661598    9948 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:38:30.667383    9948 command_runner.go:130] > NAME=Buildroot
	I0127 12:38:30.667383    9948 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0127 12:38:30.667383    9948 command_runner.go:130] > ID=buildroot
	I0127 12:38:30.667383    9948 command_runner.go:130] > VERSION_ID=2023.02.9
	I0127 12:38:30.667383    9948 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0127 12:38:30.667383    9948 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 12:38:30.667383    9948 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0127 12:38:30.668914    9948 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0127 12:38:30.669660    9948 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> 59562.pem in /etc/ssl/certs
	I0127 12:38:30.669660    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> /etc/ssl/certs/59562.pem
	I0127 12:38:30.680271    9948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:38:30.702550    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem --> /etc/ssl/certs/59562.pem (1708 bytes)
	I0127 12:38:30.754344    9948 start.go:296] duration metric: took 4.8892295s for postStartSetup
	I0127 12:38:30.754459    9948 fix.go:56] duration metric: took 1m29.402072s for fixHost
	I0127 12:38:30.754615    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:38:32.911209    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:38:32.911209    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:32.912200    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:38:35.470420    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:38:35.470420    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:35.475800    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:38:35.476512    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.205.217 22 <nil> <nil>}
	I0127 12:38:35.476512    9948 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 12:38:35.610220    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737981515.621558141
	
	I0127 12:38:35.610355    9948 fix.go:216] guest clock: 1737981515.621558141
	I0127 12:38:35.610355    9948 fix.go:229] Guest: 2025-01-27 12:38:35.621558141 +0000 UTC Remote: 2025-01-27 12:38:30.7545355 +0000 UTC m=+294.660634101 (delta=4.867022641s)
	I0127 12:38:35.610473    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:38:37.767540    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:38:37.768644    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:37.768726    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:38:40.287970    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:38:40.288485    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:40.294123    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:38:40.294123    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.205.217 22 <nil> <nil>}
	I0127 12:38:40.294667    9948 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1737981515
	I0127 12:38:40.430345    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 27 12:38:35 UTC 2025
	
	I0127 12:38:40.430345    9948 fix.go:236] clock set: Mon Jan 27 12:38:35 UTC 2025
	 (err=<nil>)
	I0127 12:38:40.430345    9948 start.go:83] releasing machines lock for "multinode-659000-m02", held for 1m39.0780099s
	I0127 12:38:40.430345    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:38:42.591115    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:38:42.591115    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:42.591115    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:38:45.140199    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:38:45.140878    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:45.146078    9948 out.go:177] * Found network options:
	I0127 12:38:45.149047    9948 out.go:177]   - NO_PROXY=172.29.198.106
	W0127 12:38:45.151690    9948 proxy.go:119] fail to check proxy env: Error ip not in block
	I0127 12:38:45.153974    9948 out.go:177]   - NO_PROXY=172.29.198.106
	W0127 12:38:45.156167    9948 proxy.go:119] fail to check proxy env: Error ip not in block
	W0127 12:38:45.157748    9948 proxy.go:119] fail to check proxy env: Error ip not in block
	I0127 12:38:45.159018    9948 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0127 12:38:45.160039    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:38:45.168117    9948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 12:38:45.169133    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:38:47.364339    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:38:47.364443    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:47.364443    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:38:47.416800    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:38:47.416887    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:47.416967    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:38:50.043339    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:38:50.043401    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:50.044030    9948 sshutil.go:53] new ssh client: &{IP:172.29.205.217 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000-m02\id_rsa Username:docker}
	I0127 12:38:50.101920    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:38:50.101920    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:50.103390    9948 sshutil.go:53] new ssh client: &{IP:172.29.205.217 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000-m02\id_rsa Username:docker}
	I0127 12:38:50.158952    9948 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0127 12:38:50.159009    9948 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9908394s)
	W0127 12:38:50.159009    9948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 12:38:50.170758    9948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:38:50.175167    9948 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0127 12:38:50.175716    9948 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0166459s)
	W0127 12:38:50.175716    9948 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0127 12:38:50.206835    9948 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0127 12:38:50.206835    9948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 12:38:50.206835    9948 start.go:495] detecting cgroup driver to use...
	I0127 12:38:50.206835    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:38:50.240717    9948 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0127 12:38:50.253417    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 12:38:50.284893    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0127 12:38:50.292860    9948 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0127 12:38:50.292860    9948 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0127 12:38:50.309268    9948 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 12:38:50.319809    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 12:38:50.355076    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:38:50.384436    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 12:38:50.415801    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:38:50.448665    9948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:38:50.483387    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 12:38:50.514794    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 12:38:50.545169    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 12:38:50.574956    9948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:38:50.593431    9948 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:38:50.593955    9948 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:38:50.605551    9948 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 12:38:50.645521    9948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 12:38:50.673465    9948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:38:50.887181    9948 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 12:38:50.919208    9948 start.go:495] detecting cgroup driver to use...
	I0127 12:38:50.932391    9948 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0127 12:38:50.956678    9948 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0127 12:38:50.956777    9948 command_runner.go:130] > [Unit]
	I0127 12:38:50.956777    9948 command_runner.go:130] > Description=Docker Application Container Engine
	I0127 12:38:50.956777    9948 command_runner.go:130] > Documentation=https://docs.docker.com
	I0127 12:38:50.956777    9948 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0127 12:38:50.956945    9948 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0127 12:38:50.957029    9948 command_runner.go:130] > StartLimitBurst=3
	I0127 12:38:50.957067    9948 command_runner.go:130] > StartLimitIntervalSec=60
	I0127 12:38:50.957067    9948 command_runner.go:130] > [Service]
	I0127 12:38:50.957067    9948 command_runner.go:130] > Type=notify
	I0127 12:38:50.957067    9948 command_runner.go:130] > Restart=on-failure
	I0127 12:38:50.957067    9948 command_runner.go:130] > Environment=NO_PROXY=172.29.198.106
	I0127 12:38:50.957067    9948 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0127 12:38:50.957067    9948 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0127 12:38:50.957067    9948 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0127 12:38:50.957067    9948 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0127 12:38:50.957067    9948 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0127 12:38:50.957067    9948 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0127 12:38:50.957067    9948 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0127 12:38:50.957067    9948 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0127 12:38:50.957067    9948 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0127 12:38:50.957067    9948 command_runner.go:130] > ExecStart=
	I0127 12:38:50.957067    9948 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0127 12:38:50.957067    9948 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0127 12:38:50.957067    9948 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0127 12:38:50.957067    9948 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0127 12:38:50.957067    9948 command_runner.go:130] > LimitNOFILE=infinity
	I0127 12:38:50.957067    9948 command_runner.go:130] > LimitNPROC=infinity
	I0127 12:38:50.957067    9948 command_runner.go:130] > LimitCORE=infinity
	I0127 12:38:50.957067    9948 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0127 12:38:50.957067    9948 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0127 12:38:50.957067    9948 command_runner.go:130] > TasksMax=infinity
	I0127 12:38:50.957067    9948 command_runner.go:130] > TimeoutStartSec=0
	I0127 12:38:50.957067    9948 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0127 12:38:50.957067    9948 command_runner.go:130] > Delegate=yes
	I0127 12:38:50.957067    9948 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0127 12:38:50.957067    9948 command_runner.go:130] > KillMode=process
	I0127 12:38:50.957633    9948 command_runner.go:130] > [Install]
	I0127 12:38:50.957633    9948 command_runner.go:130] > WantedBy=multi-user.target
	I0127 12:38:50.971827    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:38:51.002521    9948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 12:38:51.038807    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:38:51.077125    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 12:38:51.114316    9948 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 12:38:51.182797    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 12:38:51.206990    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:38:51.241224    9948 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0127 12:38:51.257584    9948 ssh_runner.go:195] Run: which cri-dockerd
	I0127 12:38:51.264066    9948 command_runner.go:130] > /usr/bin/cri-dockerd
	I0127 12:38:51.274320    9948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0127 12:38:51.293277    9948 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0127 12:38:51.334990    9948 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0127 12:38:51.546606    9948 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0127 12:38:51.735800    9948 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0127 12:38:51.735800    9948 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0127 12:38:51.784327    9948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:38:51.995576    9948 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 12:38:54.710260    9948 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.7146552s)
	I0127 12:38:54.722678    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0127 12:38:54.759442    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 12:38:54.798264    9948 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0127 12:38:55.003157    9948 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0127 12:38:55.224965    9948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:38:55.426670    9948 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0127 12:38:55.467158    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 12:38:55.502305    9948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:38:55.692077    9948 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0127 12:38:55.806274    9948 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0127 12:38:55.819446    9948 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0127 12:38:55.829805    9948 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0127 12:38:55.830810    9948 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0127 12:38:55.830810    9948 command_runner.go:130] > Device: 0,22	Inode: 851         Links: 1
	I0127 12:38:55.830810    9948 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0127 12:38:55.830810    9948 command_runner.go:130] > Access: 2025-01-27 12:38:55.727897412 +0000
	I0127 12:38:55.830810    9948 command_runner.go:130] > Modify: 2025-01-27 12:38:55.727897412 +0000
	I0127 12:38:55.830810    9948 command_runner.go:130] > Change: 2025-01-27 12:38:55.731897417 +0000
	I0127 12:38:55.830810    9948 command_runner.go:130] >  Birth: -
	I0127 12:38:55.831138    9948 start.go:563] Will wait 60s for crictl version
	I0127 12:38:55.841369    9948 ssh_runner.go:195] Run: which crictl
	I0127 12:38:55.847897    9948 command_runner.go:130] > /usr/bin/crictl
	I0127 12:38:55.858852    9948 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:38:55.915128    9948 command_runner.go:130] > Version:  0.1.0
	I0127 12:38:55.915128    9948 command_runner.go:130] > RuntimeName:  docker
	I0127 12:38:55.915221    9948 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0127 12:38:55.915221    9948 command_runner.go:130] > RuntimeApiVersion:  v1
	I0127 12:38:55.915221    9948 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0127 12:38:55.924283    9948 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 12:38:55.966103    9948 command_runner.go:130] > 27.4.0
	I0127 12:38:55.976145    9948 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 12:38:56.015095    9948 command_runner.go:130] > 27.4.0
	I0127 12:38:56.021045    9948 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0127 12:38:56.023552    9948 out.go:177]   - env NO_PROXY=172.29.198.106
	I0127 12:38:56.025630    9948 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0127 12:38:56.029967    9948 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0127 12:38:56.030978    9948 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0127 12:38:56.030978    9948 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0127 12:38:56.030978    9948 ip.go:211] Found interface: {Index:17 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:43:05:a6 Flags:up|broadcast|multicast|running}
	I0127 12:38:56.033049    9948 ip.go:214] interface addr: fe80::8ceb:a58b:811a:7c79/64
	I0127 12:38:56.033049    9948 ip.go:214] interface addr: 172.29.192.1/20
	I0127 12:38:56.050374    9948 ssh_runner.go:195] Run: grep 172.29.192.1	host.minikube.internal$ /etc/hosts
	I0127 12:38:56.057348    9948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:38:56.081175    9948 mustload.go:65] Loading cluster: multinode-659000
	I0127 12:38:56.082007    9948 config.go:182] Loaded profile config "multinode-659000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:38:56.082324    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:38:58.291556    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:38:58.291724    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:58.291724    9948 host.go:66] Checking if "multinode-659000" exists ...
	I0127 12:38:58.292529    9948 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000 for IP: 172.29.205.217
	I0127 12:38:58.292529    9948 certs.go:194] generating shared ca certs ...
	I0127 12:38:58.292529    9948 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:38:58.293366    9948 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0127 12:38:58.293366    9948 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0127 12:38:58.294074    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0127 12:38:58.294074    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0127 12:38:58.294074    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0127 12:38:58.294688    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0127 12:38:58.295176    9948 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem (1338 bytes)
	W0127 12:38:58.295540    9948 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956_empty.pem, impossibly tiny 0 bytes
	I0127 12:38:58.295703    9948 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0127 12:38:58.296212    9948 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0127 12:38:58.296487    9948 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0127 12:38:58.296487    9948 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0127 12:38:58.297343    9948 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem (1708 bytes)
	I0127 12:38:58.297533    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem -> /usr/share/ca-certificates/5956.pem
	I0127 12:38:58.297698    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> /usr/share/ca-certificates/59562.pem
	I0127 12:38:58.297698    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:38:58.297698    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:38:58.353052    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 12:38:58.403358    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:38:58.449888    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:38:58.496394    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem --> /usr/share/ca-certificates/5956.pem (1338 bytes)
	I0127 12:38:58.546231    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem --> /usr/share/ca-certificates/59562.pem (1708 bytes)
	I0127 12:38:58.589304    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:38:58.645314    9948 ssh_runner.go:195] Run: openssl version
	I0127 12:38:58.654553    9948 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0127 12:38:58.665466    9948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:38:58.695520    9948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:38:58.703112    9948 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 27 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:38:58.703209    9948 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:38:58.714153    9948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:38:58.722874    9948 command_runner.go:130] > b5213941
	I0127 12:38:58.734055    9948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0127 12:38:58.764789    9948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5956.pem && ln -fs /usr/share/ca-certificates/5956.pem /etc/ssl/certs/5956.pem"
	I0127 12:38:58.800772    9948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5956.pem
	I0127 12:38:58.808946    9948 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 27 10:52 /usr/share/ca-certificates/5956.pem
	I0127 12:38:58.808946    9948 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:52 /usr/share/ca-certificates/5956.pem
	I0127 12:38:58.820073    9948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5956.pem
	I0127 12:38:58.829441    9948 command_runner.go:130] > 51391683
	I0127 12:38:58.839559    9948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5956.pem /etc/ssl/certs/51391683.0"
	I0127 12:38:58.872690    9948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59562.pem && ln -fs /usr/share/ca-certificates/59562.pem /etc/ssl/certs/59562.pem"
	I0127 12:38:58.903678    9948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59562.pem
	I0127 12:38:58.910384    9948 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 27 10:52 /usr/share/ca-certificates/59562.pem
	I0127 12:38:58.910384    9948 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:52 /usr/share/ca-certificates/59562.pem
	I0127 12:38:58.920494    9948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59562.pem
	I0127 12:38:58.930277    9948 command_runner.go:130] > 3ec20f2e
	I0127 12:38:58.940879    9948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59562.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:38:58.989659    9948 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:38:58.995647    9948 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 12:38:58.996765    9948 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0127 12:38:58.997001    9948 kubeadm.go:934] updating node {m02 172.29.205.217 8443 v1.32.1 docker false true} ...
	I0127 12:38:58.997152    9948 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-659000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.205.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:multinode-659000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 12:38:59.006810    9948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 12:38:59.027209    9948 command_runner.go:130] > kubeadm
	I0127 12:38:59.027325    9948 command_runner.go:130] > kubectl
	I0127 12:38:59.027325    9948 command_runner.go:130] > kubelet
	I0127 12:38:59.027526    9948 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:38:59.037647    9948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0127 12:38:59.060377    9948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0127 12:38:59.091861    9948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:38:59.132795    9948 ssh_runner.go:195] Run: grep 172.29.198.106	control-plane.minikube.internal$ /etc/hosts
	I0127 12:38:59.139049    9948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.198.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:38:59.172499    9948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:38:59.376112    9948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:38:59.409318    9948 host.go:66] Checking if "multinode-659000" exists ...
	I0127 12:38:59.410540    9948 start.go:317] joinCluster: &{Name:multinode-659000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-659000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.198.106 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.205.217 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.206.88 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:38:59.410686    9948 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.29.205.217 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0127 12:38:59.410686    9948 host.go:66] Checking if "multinode-659000-m02" exists ...
	I0127 12:38:59.411446    9948 mustload.go:65] Loading cluster: multinode-659000
	I0127 12:38:59.412523    9948 config.go:182] Loaded profile config "multinode-659000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:38:59.412776    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-659000" : exit status 1
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-659000
multinode_test.go:331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-659000: context deadline exceeded (547.4µs)
multinode_test.go:333: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-659000" : context deadline exceeded
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-659000	172.29.204.17
multinode-659000-m02	172.29.199.129
multinode-659000-m03	172.29.206.88

                                                
                                                
After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-659000 -n multinode-659000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-659000 -n multinode-659000: (12.3256383s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 logs -n 25: (11.6271665s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-659000 cp testdata\cp-test.txt                                                                                 | multinode-659000 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:24 UTC | 27 Jan 25 12:24 UTC |
	|         | multinode-659000-m02:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-659000 ssh -n                                                                                                  | multinode-659000 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:24 UTC | 27 Jan 25 12:24 UTC |
	|         | multinode-659000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-659000 cp multinode-659000-m02:/home/docker/cp-test.txt                                                        | multinode-659000 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:24 UTC | 27 Jan 25 12:24 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4000448911\001\cp-test_multinode-659000-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-659000 ssh -n                                                                                                  | multinode-659000 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:24 UTC | 27 Jan 25 12:24 UTC |
	|         | multinode-659000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-659000 cp multinode-659000-m02:/home/docker/cp-test.txt                                                        | multinode-659000 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:24 UTC | 27 Jan 25 12:24 UTC |
	|         | multinode-659000:/home/docker/cp-test_multinode-659000-m02_multinode-659000.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-659000 ssh -n                                                                                                  | multinode-659000 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:24 UTC | 27 Jan 25 12:25 UTC |
	|         | multinode-659000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-659000 ssh -n multinode-659000 sudo cat                                                                        | multinode-659000 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:25 UTC | 27 Jan 25 12:25 UTC |
	|         | /home/docker/cp-test_multinode-659000-m02_multinode-659000.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-659000 cp multinode-659000-m02:/home/docker/cp-test.txt                                                        | multinode-659000 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:25 UTC | 27 Jan 25 12:25 UTC |
	|         | multinode-659000-m03:/home/docker/cp-test_multinode-659000-m02_multinode-659000-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-659000 ssh -n                                                                                                  | multinode-659000 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:25 UTC | 27 Jan 25 12:25 UTC |
	|         | multinode-659000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-659000 ssh -n multinode-659000-m03 sudo cat                                                                    | multinode-659000 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:25 UTC | 27 Jan 25 12:25 UTC |
	|         | /home/docker/cp-test_multinode-659000-m02_multinode-659000-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-659000 cp testdata\cp-test.txt                                                                                 | multinode-659000 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:25 UTC | 27 Jan 25 12:25 UTC |
	|         | multinode-659000-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-659000 ssh -n                                                                                                  | multinode-659000 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:25 UTC | 27 Jan 25 12:26 UTC |
	|         | multinode-659000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-659000 cp multinode-659000-m03:/home/docker/cp-test.txt                                                        | multinode-659000 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:26 UTC | 27 Jan 25 12:26 UTC |
	|         | C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4000448911\001\cp-test_multinode-659000-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-659000 ssh -n                                                                                                  | multinode-659000 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:26 UTC | 27 Jan 25 12:26 UTC |
	|         | multinode-659000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-659000 cp multinode-659000-m03:/home/docker/cp-test.txt                                                        | multinode-659000 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:26 UTC | 27 Jan 25 12:26 UTC |
	|         | multinode-659000:/home/docker/cp-test_multinode-659000-m03_multinode-659000.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-659000 ssh -n                                                                                                  | multinode-659000 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:26 UTC | 27 Jan 25 12:26 UTC |
	|         | multinode-659000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-659000 ssh -n multinode-659000 sudo cat                                                                        | multinode-659000 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:26 UTC | 27 Jan 25 12:26 UTC |
	|         | /home/docker/cp-test_multinode-659000-m03_multinode-659000.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-659000 cp multinode-659000-m03:/home/docker/cp-test.txt                                                        | multinode-659000 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:26 UTC | 27 Jan 25 12:27 UTC |
	|         | multinode-659000-m02:/home/docker/cp-test_multinode-659000-m03_multinode-659000-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-659000 ssh -n                                                                                                  | multinode-659000 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:27 UTC | 27 Jan 25 12:27 UTC |
	|         | multinode-659000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-659000 ssh -n multinode-659000-m02 sudo cat                                                                    | multinode-659000 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:27 UTC | 27 Jan 25 12:27 UTC |
	|         | /home/docker/cp-test_multinode-659000-m03_multinode-659000-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-659000 node stop m03                                                                                           | multinode-659000 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:27 UTC | 27 Jan 25 12:27 UTC |
	| node    | multinode-659000 node start                                                                                              | multinode-659000 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:28 UTC | 27 Jan 25 12:31 UTC |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	| node    | list -p multinode-659000                                                                                                 | multinode-659000 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:32 UTC |                     |
	| stop    | -p multinode-659000                                                                                                      | multinode-659000 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:32 UTC | 27 Jan 25 12:33 UTC |
	| start   | -p multinode-659000                                                                                                      | multinode-659000 | minikube6\jenkins | v1.35.0 | 27 Jan 25 12:33 UTC |                     |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 12:33:36
	Running on machine: minikube6
	Binary: Built with gc go1.23.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 12:33:36.181635    9948 out.go:345] Setting OutFile to fd 1164 ...
	I0127 12:33:36.251813    9948 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:33:36.251813    9948 out.go:358] Setting ErrFile to fd 1144...
	I0127 12:33:36.251813    9948 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:33:36.274144    9948 out.go:352] Setting JSON to false
	I0127 12:33:36.277140    9948 start.go:129] hostinfo: {"hostname":"minikube6","uptime":444199,"bootTime":1737537016,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5371 Build 19045.5371","kernelVersion":"10.0.19045.5371 Build 19045.5371","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0127 12:33:36.277140    9948 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0127 12:33:36.351296    9948 out.go:177] * [multinode-659000] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	I0127 12:33:36.363777    9948 notify.go:220] Checking for updates...
	I0127 12:33:36.370556    9948 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 12:33:36.405648    9948 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 12:33:36.419948    9948 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0127 12:33:36.433252    9948 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 12:33:36.454182    9948 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 12:33:36.460900    9948 config.go:182] Loaded profile config "multinode-659000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:33:36.460900    9948 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 12:33:41.755333    9948 out.go:177] * Using the hyperv driver based on existing profile
	I0127 12:33:41.765467    9948 start.go:297] selected driver: hyperv
	I0127 12:33:41.765467    9948 start.go:901] validating driver "hyperv" against &{Name:multinode-659000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-659000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.204.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.199.129 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.206.88 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:33:41.765467    9948 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0127 12:33:41.817079    9948 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0127 12:33:41.817079    9948 cni.go:84] Creating CNI manager for ""
	I0127 12:33:41.817079    9948 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0127 12:33:41.817658    9948 start.go:340] cluster config:
	{Name:multinode-659000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-659000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.204.17 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.199.129 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.206.88 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:33:41.817803    9948 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 12:33:41.912388    9948 out.go:177] * Starting "multinode-659000" primary control-plane node in "multinode-659000" cluster
	I0127 12:33:41.917744    9948 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0127 12:33:41.918332    9948 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
	I0127 12:33:41.918332    9948 cache.go:56] Caching tarball of preloaded images
	I0127 12:33:41.918796    9948 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 12:33:41.918796    9948 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0127 12:33:41.919337    9948 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\config.json ...
	I0127 12:33:41.924139    9948 start.go:360] acquireMachinesLock for multinode-659000: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:33:41.924560    9948 start.go:364] duration metric: took 115.2µs to acquireMachinesLock for "multinode-659000"
	I0127 12:33:41.925668    9948 start.go:96] Skipping create...Using existing machine configuration
	I0127 12:33:41.925668    9948 fix.go:54] fixHost starting: 
	I0127 12:33:41.926312    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:33:44.612570    9948 main.go:141] libmachine: [stdout =====>] : Off
	
	I0127 12:33:44.612657    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:33:44.612657    9948 fix.go:112] recreateIfNeeded on multinode-659000: state=Stopped err=<nil>
	W0127 12:33:44.612657    9948 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 12:33:44.756131    9948 out.go:177] * Restarting existing hyperv VM for "multinode-659000" ...
	I0127 12:33:44.804183    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-659000
	I0127 12:33:47.819240    9948 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:33:47.820017    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:33:47.820017    9948 main.go:141] libmachine: Waiting for host to start...
	I0127 12:33:47.820073    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:33:49.973733    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:33:49.973733    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:33:49.973733    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:33:52.378547    9948 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:33:52.378547    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:33:53.380366    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:33:55.489072    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:33:55.489889    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:33:55.489975    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:33:57.988771    9948 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:33:57.988771    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:33:58.988973    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:34:01.184614    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:34:01.184614    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:01.184614    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:34:03.677566    9948 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:34:03.677662    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:04.677924    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:34:06.826044    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:34:06.826044    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:06.826140    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:34:09.249700    9948 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:34:09.249700    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:10.251014    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:34:12.403029    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:34:12.403029    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:12.403319    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:34:14.858430    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:34:14.858430    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:14.861484    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:34:16.940482    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:34:16.940482    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:16.940482    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:34:19.405200    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:34:19.405200    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:19.405200    9948 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\config.json ...
	I0127 12:34:19.408976    9948 machine.go:93] provisionDockerMachine start ...
	I0127 12:34:19.409295    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:34:21.464326    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:34:21.464326    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:21.464326    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:34:23.974791    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:34:23.975617    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:23.980768    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:34:23.981360    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.198.106 22 <nil> <nil>}
	I0127 12:34:23.981360    9948 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 12:34:24.122270    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 12:34:24.122270    9948 buildroot.go:166] provisioning hostname "multinode-659000"
	I0127 12:34:24.122270    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:34:26.208942    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:34:26.209389    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:26.209479    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:34:28.645671    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:34:28.645949    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:28.650839    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:34:28.650839    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.198.106 22 <nil> <nil>}
	I0127 12:34:28.650839    9948 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-659000 && echo "multinode-659000" | sudo tee /etc/hostname
	I0127 12:34:28.808809    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-659000
	
	I0127 12:34:28.808951    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:34:30.823522    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:34:30.823665    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:30.823720    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:34:33.232639    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:34:33.232639    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:33.238810    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:34:33.239010    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.198.106 22 <nil> <nil>}
	I0127 12:34:33.239010    9948 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-659000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-659000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-659000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:34:33.394842    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:34:33.394842    9948 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0127 12:34:33.394842    9948 buildroot.go:174] setting up certificates
	I0127 12:34:33.394842    9948 provision.go:84] configureAuth start
	I0127 12:34:33.394842    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:34:35.443924    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:34:35.444484    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:35.444592    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:34:37.821223    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:34:37.821223    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:37.821990    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:34:39.846534    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:34:39.846663    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:39.846663    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:34:42.243984    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:34:42.244935    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:42.244935    9948 provision.go:143] copyHostCerts
	I0127 12:34:42.245205    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0127 12:34:42.245326    9948 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0127 12:34:42.245326    9948 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0127 12:34:42.245919    9948 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0127 12:34:42.246658    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0127 12:34:42.247407    9948 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0127 12:34:42.247407    9948 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0127 12:34:42.247760    9948 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0127 12:34:42.248604    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0127 12:34:42.248604    9948 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0127 12:34:42.249132    9948 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0127 12:34:42.249338    9948 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0127 12:34:42.250527    9948 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-659000 san=[127.0.0.1 172.29.198.106 localhost minikube multinode-659000]
	I0127 12:34:42.435902    9948 provision.go:177] copyRemoteCerts
	I0127 12:34:42.446432    9948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:34:42.447011    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:34:44.441075    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:34:44.441992    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:44.442060    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:34:46.880196    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:34:46.881114    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:46.881684    9948 sshutil.go:53] new ssh client: &{IP:172.29.198.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000\id_rsa Username:docker}
	I0127 12:34:46.990000    9948 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.5429415s)
	I0127 12:34:46.990000    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0127 12:34:46.990601    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 12:34:47.032978    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0127 12:34:47.033578    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0127 12:34:47.086735    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0127 12:34:47.087326    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 12:34:47.130626    9948 provision.go:87] duration metric: took 13.7356397s to configureAuth
	I0127 12:34:47.130626    9948 buildroot.go:189] setting minikube options for container-runtime
	I0127 12:34:47.131301    9948 config.go:182] Loaded profile config "multinode-659000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:34:47.131301    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:34:49.119922    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:34:49.119922    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:49.120788    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:34:51.515761    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:34:51.516107    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:51.522691    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:34:51.523381    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.198.106 22 <nil> <nil>}
	I0127 12:34:51.523381    9948 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0127 12:34:51.655115    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0127 12:34:51.655115    9948 buildroot.go:70] root file system type: tmpfs
	I0127 12:34:51.655115    9948 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0127 12:34:51.655115    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:34:53.659970    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:34:53.659970    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:53.659970    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:34:56.093986    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:34:56.093986    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:56.099701    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:34:56.100348    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.198.106 22 <nil> <nil>}
	I0127 12:34:56.100348    9948 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0127 12:34:56.266086    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0127 12:34:56.266086    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:34:58.267768    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:34:58.268056    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:34:58.268056    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:35:00.723131    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:35:00.723131    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:35:00.728427    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:35:00.729159    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.198.106 22 <nil> <nil>}
	I0127 12:35:00.729159    9948 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0127 12:35:03.256939    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0127 12:35:03.257053    9948 machine.go:96] duration metric: took 43.8476164s to provisionDockerMachine
	I0127 12:35:03.257053    9948 start.go:293] postStartSetup for "multinode-659000" (driver="hyperv")
	I0127 12:35:03.257053    9948 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:35:03.267563    9948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:35:03.267563    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:35:05.316508    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:35:05.316508    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:35:05.316664    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:35:07.700356    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:35:07.700593    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:35:07.700593    9948 sshutil.go:53] new ssh client: &{IP:172.29.198.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000\id_rsa Username:docker}
	I0127 12:35:07.811310    9948 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5435572s)
	I0127 12:35:07.821716    9948 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:35:07.829117    9948 command_runner.go:130] > NAME=Buildroot
	I0127 12:35:07.829198    9948 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0127 12:35:07.829198    9948 command_runner.go:130] > ID=buildroot
	I0127 12:35:07.829198    9948 command_runner.go:130] > VERSION_ID=2023.02.9
	I0127 12:35:07.829198    9948 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0127 12:35:07.829325    9948 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 12:35:07.829391    9948 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0127 12:35:07.829690    9948 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0127 12:35:07.830620    9948 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> 59562.pem in /etc/ssl/certs
	I0127 12:35:07.830620    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> /etc/ssl/certs/59562.pem
	I0127 12:35:07.846327    9948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:35:07.871475    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem --> /etc/ssl/certs/59562.pem (1708 bytes)
	I0127 12:35:07.917276    9948 start.go:296] duration metric: took 4.6601745s for postStartSetup
	I0127 12:35:07.917514    9948 fix.go:56] duration metric: took 1m25.9908456s for fixHost
	I0127 12:35:07.917588    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:35:09.946554    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:35:09.946554    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:35:09.946642    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:35:12.420548    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:35:12.421287    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:35:12.425141    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:35:12.425955    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.198.106 22 <nil> <nil>}
	I0127 12:35:12.425955    9948 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 12:35:12.561877    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737981312.574710952
	
	I0127 12:35:12.561877    9948 fix.go:216] guest clock: 1737981312.574710952
	I0127 12:35:12.561877    9948 fix.go:229] Guest: 2025-01-27 12:35:12.574710952 +0000 UTC Remote: 2025-01-27 12:35:07.9175148 +0000 UTC m=+91.825743201 (delta=4.657196152s)
	I0127 12:35:12.561877    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:35:14.604407    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:35:14.604407    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:35:14.605231    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:35:17.014500    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:35:17.015341    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:35:17.020755    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:35:17.021344    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.198.106 22 <nil> <nil>}
	I0127 12:35:17.021344    9948 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1737981312
	I0127 12:35:17.172109    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 27 12:35:12 UTC 2025
	
	I0127 12:35:17.172250    9948 fix.go:236] clock set: Mon Jan 27 12:35:12 UTC 2025
	 (err=<nil>)
	I0127 12:35:17.172250    9948 start.go:83] releasing machines lock for "multinode-659000", held for 1m35.2466899s
	I0127 12:35:17.172582    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:35:19.201686    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:35:19.201800    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:35:19.201800    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:35:21.659472    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:35:21.659472    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:35:21.664728    9948 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0127 12:35:21.664805    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:35:21.676633    9948 ssh_runner.go:195] Run: cat /version.json
	I0127 12:35:21.676891    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:35:23.813105    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:35:23.813105    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:35:23.813105    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:35:23.813729    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:35:23.813729    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:35:23.814092    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:35:26.358433    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:35:26.359150    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:35:26.359862    9948 sshutil.go:53] new ssh client: &{IP:172.29.198.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000\id_rsa Username:docker}
	I0127 12:35:26.380896    9948 main.go:141] libmachine: [stdout =====>] : 172.29.198.106
	
	I0127 12:35:26.380896    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:35:26.381944    9948 sshutil.go:53] new ssh client: &{IP:172.29.198.106 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000\id_rsa Username:docker}
	I0127 12:35:26.456310    9948 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0127 12:35:26.456415    9948 ssh_runner.go:235] Completed: cat /version.json: (4.7796547s)
	I0127 12:35:26.468432    9948 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0127 12:35:26.469086    9948 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.8043085s)
	W0127 12:35:26.469086    9948 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0127 12:35:26.470470    9948 ssh_runner.go:195] Run: systemctl --version
	I0127 12:35:26.479670    9948 command_runner.go:130] > systemd 252 (252)
	I0127 12:35:26.479670    9948 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0127 12:35:26.491518    9948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 12:35:26.498399    9948 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0127 12:35:26.498399    9948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 12:35:26.511161    9948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:35:26.536519    9948 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0127 12:35:26.536519    9948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
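Before wiring up its own CNI, the start path renames any pre-existing bridge or podman CNI configs (here /etc/cni/net.d/87-podman-bridge.conflist) to *.mk_disabled so they cannot conflict with the kindnet configuration installed later. A rough Go equivalent of that rename pass, assuming the same directory layout; this is an illustration, not the code minikube runs (the log shows it uses a find/mv one-liner over SSH).

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Target the same files as the find command above: bridge/podman configs
	// that have not already been disabled.
	patterns := []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"}
	for _, p := range patterns {
		matches, err := filepath.Glob(p)
		if err != nil {
			continue
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue
			}
			// Rename rather than delete so the change is reversible.
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			fmt.Println("disabled", m)
		}
	}
}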
	I0127 12:35:26.536519    9948 start.go:495] detecting cgroup driver to use...
	I0127 12:35:26.536519    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:35:26.570419    9948 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	W0127 12:35:26.583646    9948 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0127 12:35:26.583646    9948 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0127 12:35:26.588643    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 12:35:26.616375    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0127 12:35:26.634640    9948 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 12:35:26.644912    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 12:35:26.673793    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:35:26.701860    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 12:35:26.731973    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:35:26.759279    9948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:35:26.787275    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 12:35:26.816442    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 12:35:26.846113    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
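The containerd preparation above is a series of in-place sed edits to /etc/containerd/config.toml: pin the sandbox image to registry.k8s.io/pause:3.10, switch runtimes to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, enable unprivileged ports, and set SystemdCgroup = false so containerd matches the cgroupfs driver chosen for this run. A small Go sketch of two of those rewrites applied to an inline sample config (the sample content is assumed, not the real file):

package main

import (
	"fmt"
	"regexp"
)

// Mirrors two of the sed edits above: pin the pause (sandbox) image and
// force the cgroupfs driver via SystemdCgroup = false. Illustrative only.
func main() {
	config := `[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
`
	sandbox := regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`)
	config = sandbox.ReplaceAllString(config, `${1}sandbox_image = "registry.k8s.io/pause:3.10"`)

	cgroup := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	config = cgroup.ReplaceAllString(config, "${1}SystemdCgroup = false")

	fmt.Print(config)
}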
	I0127 12:35:26.875684    9948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:35:26.893061    9948 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:35:26.893259    9948 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:35:26.905737    9948 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 12:35:26.938047    9948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 12:35:26.968644    9948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:35:27.155657    9948 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0127 12:35:27.182779    9948 start.go:495] detecting cgroup driver to use...
	I0127 12:35:27.193269    9948 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0127 12:35:27.217485    9948 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0127 12:35:27.217485    9948 command_runner.go:130] > [Unit]
	I0127 12:35:27.217536    9948 command_runner.go:130] > Description=Docker Application Container Engine
	I0127 12:35:27.217536    9948 command_runner.go:130] > Documentation=https://docs.docker.com
	I0127 12:35:27.217536    9948 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0127 12:35:27.217536    9948 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0127 12:35:27.217536    9948 command_runner.go:130] > StartLimitBurst=3
	I0127 12:35:27.217585    9948 command_runner.go:130] > StartLimitIntervalSec=60
	I0127 12:35:27.217656    9948 command_runner.go:130] > [Service]
	I0127 12:35:27.217656    9948 command_runner.go:130] > Type=notify
	I0127 12:35:27.217656    9948 command_runner.go:130] > Restart=on-failure
	I0127 12:35:27.217721    9948 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0127 12:35:27.217721    9948 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0127 12:35:27.217721    9948 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0127 12:35:27.217721    9948 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0127 12:35:27.217721    9948 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0127 12:35:27.217721    9948 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0127 12:35:27.217721    9948 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0127 12:35:27.217811    9948 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0127 12:35:27.217811    9948 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0127 12:35:27.217867    9948 command_runner.go:130] > ExecStart=
	I0127 12:35:27.217891    9948 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0127 12:35:27.217891    9948 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0127 12:35:27.217920    9948 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0127 12:35:27.217949    9948 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0127 12:35:27.217949    9948 command_runner.go:130] > LimitNOFILE=infinity
	I0127 12:35:27.217949    9948 command_runner.go:130] > LimitNPROC=infinity
	I0127 12:35:27.217949    9948 command_runner.go:130] > LimitCORE=infinity
	I0127 12:35:27.217949    9948 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0127 12:35:27.217949    9948 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0127 12:35:27.218007    9948 command_runner.go:130] > TasksMax=infinity
	I0127 12:35:27.218007    9948 command_runner.go:130] > TimeoutStartSec=0
	I0127 12:35:27.218007    9948 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0127 12:35:27.218007    9948 command_runner.go:130] > Delegate=yes
	I0127 12:35:27.218007    9948 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0127 12:35:27.218052    9948 command_runner.go:130] > KillMode=process
	I0127 12:35:27.218052    9948 command_runner.go:130] > [Install]
	I0127 12:35:27.218052    9948 command_runner.go:130] > WantedBy=multi-user.target
	I0127 12:35:27.228679    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:35:27.261181    9948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 12:35:27.299697    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:35:27.331802    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 12:35:27.362225    9948 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 12:35:27.425537    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 12:35:27.447502    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:35:27.478887    9948 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0127 12:35:27.489443    9948 ssh_runner.go:195] Run: which cri-dockerd
	I0127 12:35:27.495409    9948 command_runner.go:130] > /usr/bin/cri-dockerd
	I0127 12:35:27.505510    9948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0127 12:35:27.525120    9948 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0127 12:35:27.564210    9948 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0127 12:35:27.750206    9948 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0127 12:35:27.928554    9948 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0127 12:35:27.928850    9948 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0127 12:35:27.970096    9948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:35:28.170454    9948 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 12:35:30.856767    9948 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.686234s)
	I0127 12:35:30.868578    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0127 12:35:30.900902    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 12:35:30.939319    9948 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0127 12:35:31.146599    9948 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0127 12:35:31.332394    9948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:35:31.498147    9948 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0127 12:35:31.536968    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 12:35:31.569205    9948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:35:31.743832    9948 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0127 12:35:31.839150    9948 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0127 12:35:31.851132    9948 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0127 12:35:31.862665    9948 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0127 12:35:31.862665    9948 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0127 12:35:31.862665    9948 command_runner.go:130] > Device: 0,22	Inode: 848         Links: 1
	I0127 12:35:31.862665    9948 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0127 12:35:31.862665    9948 command_runner.go:130] > Access: 2025-01-27 12:35:31.778144827 +0000
	I0127 12:35:31.862665    9948 command_runner.go:130] > Modify: 2025-01-27 12:35:31.778144827 +0000
	I0127 12:35:31.862665    9948 command_runner.go:130] > Change: 2025-01-27 12:35:31.781144837 +0000
	I0127 12:35:31.862665    9948 command_runner.go:130] >  Birth: -
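After restarting cri-docker.service the start path does not assume the socket is immediately usable: it stats /var/run/cri-dockerd.sock in a loop with a 60s budget ("Will wait 60s for socket path") before moving on to crictl. A hedged Go sketch of that wait loop; the 500ms poll interval is an assumption and is not taken from the log.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the path exists or the timeout elapses -
// roughly what the "Will wait 60s for socket path" step above does.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond) // assumed interval
	}
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("cri-dockerd socket is ready")
}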
	I0127 12:35:31.862665    9948 start.go:563] Will wait 60s for crictl version
	I0127 12:35:31.872553    9948 ssh_runner.go:195] Run: which crictl
	I0127 12:35:31.879243    9948 command_runner.go:130] > /usr/bin/crictl
	I0127 12:35:31.888699    9948 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:35:31.943263    9948 command_runner.go:130] > Version:  0.1.0
	I0127 12:35:31.943263    9948 command_runner.go:130] > RuntimeName:  docker
	I0127 12:35:31.943263    9948 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0127 12:35:31.943320    9948 command_runner.go:130] > RuntimeApiVersion:  v1
	I0127 12:35:31.943320    9948 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0127 12:35:31.956537    9948 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 12:35:31.989370    9948 command_runner.go:130] > 27.4.0
	I0127 12:35:31.998230    9948 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 12:35:32.026782    9948 command_runner.go:130] > 27.4.0
	I0127 12:35:32.030346    9948 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0127 12:35:32.030579    9948 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0127 12:35:32.035536    9948 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0127 12:35:32.035536    9948 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0127 12:35:32.035536    9948 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0127 12:35:32.035536    9948 ip.go:211] Found interface: {Index:17 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:43:05:a6 Flags:up|broadcast|multicast|running}
	I0127 12:35:32.038296    9948 ip.go:214] interface addr: fe80::8ceb:a58b:811a:7c79/64
	I0127 12:35:32.039357    9948 ip.go:214] interface addr: 172.29.192.1/20
	I0127 12:35:32.052435    9948 ssh_runner.go:195] Run: grep 172.29.192.1	host.minikube.internal$ /etc/hosts
	I0127 12:35:32.058836    9948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:35:32.083737    9948 kubeadm.go:883] updating cluster {Name:multinode-659000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-659000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.198.106 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.199.129 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.206.88 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:fa
lse istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0127 12:35:32.084263    9948 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0127 12:35:32.094131    9948 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 12:35:32.121250    9948 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.32.1
	I0127 12:35:32.121250    9948 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.32.1
	I0127 12:35:32.121250    9948 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 12:35:32.121250    9948 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.32.1
	I0127 12:35:32.122089    9948 command_runner.go:130] > kindest/kindnetd:v20241108-5c6d2daf
	I0127 12:35:32.122089    9948 command_runner.go:130] > registry.k8s.io/etcd:3.5.16-0
	I0127 12:35:32.122089    9948 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0127 12:35:32.122089    9948 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0127 12:35:32.122089    9948 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:35:32.122089    9948 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0127 12:35:32.122089    9948 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	kindest/kindnetd:v20241108-5c6d2daf
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0127 12:35:32.122089    9948 docker.go:619] Images already preloaded, skipping extraction
	I0127 12:35:32.131547    9948 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0127 12:35:32.156708    9948 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.32.1
	I0127 12:35:32.156708    9948 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.32.1
	I0127 12:35:32.156708    9948 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.32.1
	I0127 12:35:32.156788    9948 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.32.1
	I0127 12:35:32.156788    9948 command_runner.go:130] > kindest/kindnetd:v20241108-5c6d2daf
	I0127 12:35:32.156823    9948 command_runner.go:130] > registry.k8s.io/etcd:3.5.16-0
	I0127 12:35:32.156823    9948 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0127 12:35:32.156823    9948 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0127 12:35:32.156823    9948 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0127 12:35:32.156823    9948 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0127 12:35:32.156888    9948 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.1
	registry.k8s.io/kube-controller-manager:v1.32.1
	registry.k8s.io/kube-scheduler:v1.32.1
	registry.k8s.io/kube-proxy:v1.32.1
	kindest/kindnetd:v20241108-5c6d2daf
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0127 12:35:32.156888    9948 cache_images.go:84] Images are preloaded, skipping loading
	I0127 12:35:32.157020    9948 kubeadm.go:934] updating node { 172.29.198.106 8443 v1.32.1 docker true true} ...
	I0127 12:35:32.157251    9948 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-659000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.29.198.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.1 ClusterName:multinode-659000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0127 12:35:32.166793    9948 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0127 12:35:32.233267    9948 command_runner.go:130] > cgroupfs
	I0127 12:35:32.233385    9948 cni.go:84] Creating CNI manager for ""
	I0127 12:35:32.233385    9948 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0127 12:35:32.233471    9948 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0127 12:35:32.233540    9948 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.29.198.106 APIServerPort:8443 KubernetesVersion:v1.32.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-659000 NodeName:multinode-659000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.29.198.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.29.198.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0127 12:35:32.233784    9948 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.29.198.106
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-659000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.29.198.106"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.29.198.106"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
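The block above is the complete kubeadm config that will be written to /var/tmp/minikube/kubeadm.yaml.new: a four-document YAML stream covering the init endpoint (advertiseAddress 172.29.198.106, bindPort 8443), the cluster (control-plane.minikube.internal:8443, pod subnet 10.244.0.0/16, service subnet 10.96.0.0/12), the kubelet (cgroupfs, cri-dockerd socket) and kube-proxy. A minimal Go sketch that walks such a stream and reports each document's kind, using only the standard library and a truncated sample of the content above:

package main

import (
	"fmt"
	"strings"
)

// The generated kubeadm.yaml is a multi-document YAML stream. This sketch
// splits the stream on document separators and prints each kind, without
// pulling in a YAML library. Sample content is truncated for illustration.
func main() {
	sample := `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	for i, doc := range strings.Split(sample, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimSpace(line))
			}
		}
	}
}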
	I0127 12:35:32.245885    9948 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.1
	I0127 12:35:32.265189    9948 command_runner.go:130] > kubeadm
	I0127 12:35:32.265189    9948 command_runner.go:130] > kubectl
	I0127 12:35:32.265239    9948 command_runner.go:130] > kubelet
	I0127 12:35:32.265239    9948 binaries.go:44] Found k8s binaries, skipping transfer
	I0127 12:35:32.279660    9948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0127 12:35:32.297475    9948 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0127 12:35:32.326698    9948 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0127 12:35:32.354455    9948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2300 bytes)
	I0127 12:35:32.396719    9948 ssh_runner.go:195] Run: grep 172.29.198.106	control-plane.minikube.internal$ /etc/hosts
	I0127 12:35:32.403001    9948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.29.198.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:35:32.433908    9948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:35:32.607554    9948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:35:32.635931    9948 certs.go:68] Setting up C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000 for IP: 172.29.198.106
	I0127 12:35:32.636017    9948 certs.go:194] generating shared ca certs ...
	I0127 12:35:32.636017    9948 certs.go:226] acquiring lock for ca certs: {Name:mkc71405905d3cea24da832e98113e061e759324 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:35:32.636956    9948 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key
	I0127 12:35:32.637363    9948 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key
	I0127 12:35:32.637578    9948 certs.go:256] generating profile certs ...
	I0127 12:35:32.638317    9948 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\client.key
	I0127 12:35:32.638565    9948 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.key.8dbcec51
	I0127 12:35:32.638703    9948 crypto.go:68] Generating cert C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.crt.8dbcec51 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.29.198.106]
	I0127 12:35:32.915804    9948 crypto.go:156] Writing cert to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.crt.8dbcec51 ...
	I0127 12:35:32.916832    9948 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.crt.8dbcec51: {Name:mk0bc2c577d2d85da05a757ce498d238f017bb3d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:35:32.917811    9948 crypto.go:164] Writing key to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.key.8dbcec51 ...
	I0127 12:35:32.917811    9948 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.key.8dbcec51: {Name:mka016434d6d6285c6597b5a27e613438132168c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:35:32.918411    9948 certs.go:381] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.crt.8dbcec51 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.crt
	I0127 12:35:32.932671    9948 certs.go:385] copying C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.key.8dbcec51 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.key
	I0127 12:35:32.934971    9948 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\proxy-client.key
	I0127 12:35:32.934971    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0127 12:35:32.935300    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0127 12:35:32.935469    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0127 12:35:32.935535    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0127 12:35:32.935838    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0127 12:35:32.935992    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0127 12:35:32.936305    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0127 12:35:32.936305    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0127 12:35:32.936844    9948 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem (1338 bytes)
	W0127 12:35:32.937452    9948 certs.go:480] ignoring C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956_empty.pem, impossibly tiny 0 bytes
	I0127 12:35:32.937452    9948 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0127 12:35:32.937871    9948 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0127 12:35:32.938226    9948 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0127 12:35:32.938226    9948 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I0127 12:35:32.938226    9948 certs.go:484] found cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem (1708 bytes)
	I0127 12:35:32.938226    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:35:32.938226    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem -> /usr/share/ca-certificates/5956.pem
	I0127 12:35:32.939412    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> /usr/share/ca-certificates/59562.pem
	I0127 12:35:32.940639    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0127 12:35:32.992212    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0127 12:35:33.031894    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0127 12:35:33.081403    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0127 12:35:33.125225    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0127 12:35:33.166348    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0127 12:35:33.211858    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0127 12:35:33.253039    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0127 12:35:33.300278    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0127 12:35:33.343433    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\5956.pem --> /usr/share/ca-certificates/5956.pem (1338 bytes)
	I0127 12:35:33.390186    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem --> /usr/share/ca-certificates/59562.pem (1708 bytes)
	I0127 12:35:33.432257    9948 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0127 12:35:33.473824    9948 ssh_runner.go:195] Run: openssl version
	I0127 12:35:33.481989    9948 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0127 12:35:33.491533    9948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5956.pem && ln -fs /usr/share/ca-certificates/5956.pem /etc/ssl/certs/5956.pem"
	I0127 12:35:33.517440    9948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5956.pem
	I0127 12:35:33.524004    9948 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 27 10:52 /usr/share/ca-certificates/5956.pem
	I0127 12:35:33.525172    9948 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 27 10:52 /usr/share/ca-certificates/5956.pem
	I0127 12:35:33.538098    9948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5956.pem
	I0127 12:35:33.545660    9948 command_runner.go:130] > 51391683
	I0127 12:35:33.556141    9948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5956.pem /etc/ssl/certs/51391683.0"
	I0127 12:35:33.584743    9948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/59562.pem && ln -fs /usr/share/ca-certificates/59562.pem /etc/ssl/certs/59562.pem"
	I0127 12:35:33.610589    9948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/59562.pem
	I0127 12:35:33.618085    9948 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 27 10:52 /usr/share/ca-certificates/59562.pem
	I0127 12:35:33.618085    9948 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 27 10:52 /usr/share/ca-certificates/59562.pem
	I0127 12:35:33.627711    9948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/59562.pem
	I0127 12:35:33.635525    9948 command_runner.go:130] > 3ec20f2e
	I0127 12:35:33.645737    9948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/59562.pem /etc/ssl/certs/3ec20f2e.0"
	I0127 12:35:33.671803    9948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0127 12:35:33.699427    9948 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:35:33.705546    9948 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 27 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:35:33.705546    9948 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 27 10:36 /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:35:33.715843    9948 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0127 12:35:33.724183    9948 command_runner.go:130] > b5213941
	I0127 12:35:33.734350    9948 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
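Each CA file copied into /usr/share/ca-certificates is then activated the way ca-certificates does it: openssl x509 -hash prints the subject hash (51391683, 3ec20f2e and b5213941 above) and a <hash>.0 symlink in /etc/ssl/certs points back at the PEM. A sketch of that pairing which shells out to openssl; it assumes the openssl binary is on PATH and the process can write to /etc/ssl/certs, and it is illustrative rather than the actual minikube code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash mimics the install step above: ask openssl for the subject hash
// and create /etc/ssl/certs/<hash>.0 pointing at the PEM file.
// Requires openssl on PATH; illustrative only.
func linkByHash(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. 51391683 for 5956.pem in the log
	link := "/etc/ssl/certs/" + hash + ".0"
	// ln -fs equivalent: drop any stale link before recreating it.
	_ = os.Remove(link)
	return os.Symlink(pem, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/5956.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}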
	I0127 12:35:33.765332    9948 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:35:33.772366    9948 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0127 12:35:33.772366    9948 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0127 12:35:33.772366    9948 command_runner.go:130] > Device: 8,1	Inode: 3148641     Links: 1
	I0127 12:35:33.772466    9948 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0127 12:35:33.772466    9948 command_runner.go:130] > Access: 2025-01-27 12:11:47.940042269 +0000
	I0127 12:35:33.772466    9948 command_runner.go:130] > Modify: 2025-01-27 12:11:47.940042269 +0000
	I0127 12:35:33.772466    9948 command_runner.go:130] > Change: 2025-01-27 12:11:47.940042269 +0000
	I0127 12:35:33.772524    9948 command_runner.go:130] >  Birth: 2025-01-27 12:11:47.940042269 +0000
	I0127 12:35:33.780865    9948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0127 12:35:33.789536    9948 command_runner.go:130] > Certificate will not expire
	I0127 12:35:33.799657    9948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0127 12:35:33.807439    9948 command_runner.go:130] > Certificate will not expire
	I0127 12:35:33.817568    9948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0127 12:35:33.826161    9948 command_runner.go:130] > Certificate will not expire
	I0127 12:35:33.836213    9948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0127 12:35:33.847913    9948 command_runner.go:130] > Certificate will not expire
	I0127 12:35:33.857820    9948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0127 12:35:33.866078    9948 command_runner.go:130] > Certificate will not expire
	I0127 12:35:33.875461    9948 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0127 12:35:33.882738    9948 command_runner.go:130] > Certificate will not expire
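The six openssl runs above all pass -checkend 86400, i.e. "will this certificate still be valid in 24 hours?"; a certificate failing that check would be regenerated before the control plane is restarted. The same check can be expressed in pure Go with crypto/x509, as in the sketch below (an illustration, not the command minikube actually runs):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// inside the given window - the pure-Go analogue of
// "openssl x509 -checkend 86400". Illustrative sketch only.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h - would be regenerated")
	} else {
		fmt.Println("certificate will not expire within 24h")
	}
}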
	I0127 12:35:33.882738    9948 kubeadm.go:392] StartCluster: {Name:multinode-659000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:multinode-659000 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.198.106 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.29.199.129 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.29.206.88 Port:0 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false
istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 12:35:33.891709    9948 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0127 12:35:33.925944    9948 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0127 12:35:33.944341    9948 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0127 12:35:33.944341    9948 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0127 12:35:33.944341    9948 command_runner.go:130] > /var/lib/minikube/etcd:
	I0127 12:35:33.944341    9948 command_runner.go:130] > member
	I0127 12:35:33.944341    9948 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0127 12:35:33.944341    9948 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0127 12:35:33.955335    9948 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0127 12:35:33.974424    9948 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0127 12:35:33.975338    9948 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-659000" does not appear in C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 12:35:33.976433    9948 kubeconfig.go:62] C:\Users\jenkins.minikube6\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-659000" cluster setting kubeconfig missing "multinode-659000" context setting]
	I0127 12:35:33.977390    9948 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:35:33.995095    9948 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 12:35:33.995377    9948 kapi.go:59] client config for multinode-659000: &rest.Config{Host:"https://172.29.198.106:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-659000/client.crt", KeyFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\multinode-659000/client.key", CAFile:"C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x301e420), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0127 12:35:33.996538    9948 cert_rotation.go:140] Starting client certificate rotation controller
	I0127 12:35:34.007906    9948 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0127 12:35:34.025167    9948 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0127 12:35:34.025222    9948 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0127 12:35:34.025222    9948 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0127 12:35:34.025222    9948 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta4
	I0127 12:35:34.025222    9948 command_runner.go:130] >  kind: InitConfiguration
	I0127 12:35:34.025222    9948 command_runner.go:130] >  localAPIEndpoint:
	I0127 12:35:34.025301    9948 command_runner.go:130] > -  advertiseAddress: 172.29.204.17
	I0127 12:35:34.025301    9948 command_runner.go:130] > +  advertiseAddress: 172.29.198.106
	I0127 12:35:34.025301    9948 command_runner.go:130] >    bindPort: 8443
	I0127 12:35:34.025301    9948 command_runner.go:130] >  bootstrapTokens:
	I0127 12:35:34.025301    9948 command_runner.go:130] >    - groups:
	I0127 12:35:34.025301    9948 command_runner.go:130] > @@ -15,13 +15,13 @@
	I0127 12:35:34.025301    9948 command_runner.go:130] >    name: "multinode-659000"
	I0127 12:35:34.025301    9948 command_runner.go:130] >    kubeletExtraArgs:
	I0127 12:35:34.025301    9948 command_runner.go:130] >      - name: "node-ip"
	I0127 12:35:34.025301    9948 command_runner.go:130] > -      value: "172.29.204.17"
	I0127 12:35:34.025399    9948 command_runner.go:130] > +      value: "172.29.198.106"
	I0127 12:35:34.025399    9948 command_runner.go:130] >    taints: []
	I0127 12:35:34.025399    9948 command_runner.go:130] >  ---
	I0127 12:35:34.025441    9948 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta4
	I0127 12:35:34.025441    9948 command_runner.go:130] >  kind: ClusterConfiguration
	I0127 12:35:34.025441    9948 command_runner.go:130] >  apiServer:
	I0127 12:35:34.025441    9948 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.29.204.17"]
	I0127 12:35:34.025441    9948 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.29.198.106"]
	I0127 12:35:34.025441    9948 command_runner.go:130] >    extraArgs:
	I0127 12:35:34.025495    9948 command_runner.go:130] >      - name: "enable-admission-plugins"
	I0127 12:35:34.025495    9948 command_runner.go:130] >        value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0127 12:35:34.025533    9948 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.29.204.17
	+  advertiseAddress: 172.29.198.106
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -15,13 +15,13 @@
	   name: "multinode-659000"
	   kubeletExtraArgs:
	     - name: "node-ip"
	-      value: "172.29.204.17"
	+      value: "172.29.198.106"
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.29.204.17"]
	+  certSANs: ["127.0.0.1", "localhost", "172.29.198.106"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	       value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	
	-- /stdout --
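The diff shows why a plain restart is not enough here: the VM came back with a new address (172.29.198.106 instead of 172.29.204.17), so the advertiseAddress, the node-ip kubelet arg and the apiserver certSANs in the on-disk kubeadm.yaml are stale, and the cluster is reconfigured from kubeadm.yaml.new. A minimal sketch of that drift decision; the real check, as logged above, runs sudo diff -u over SSH rather than comparing the files locally.

package main

import (
	"bytes"
	"fmt"
	"os"
)

// configDrifted reports whether the live kubeadm config differs from the
// freshly generated one - the decision the diff above feeds into.
// Sketch only; minikube shells out to "sudo diff -u" on the node instead.
func configDrifted(current, generated string) (bool, error) {
	a, err := os.ReadFile(current)
	if err != nil {
		return false, err
	}
	b, err := os.ReadFile(generated)
	if err != nil {
		return false, err
	}
	return !bytes.Equal(a, b), nil
}

func main() {
	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if drifted {
		fmt.Println("kubeadm config drift detected - reconfigure cluster from the new file")
	} else {
		fmt.Println("kubeadm config unchanged - restart without reconfiguration")
	}
}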
	I0127 12:35:34.025596    9948 kubeadm.go:1160] stopping kube-system containers ...
	I0127 12:35:34.034084    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0127 12:35:34.065879    9948 command_runner.go:130] > f818dd15d8b0
	I0127 12:35:34.066789    9948 command_runner.go:130] > 134620caeeb9
	I0127 12:35:34.066789    9948 command_runner.go:130] > bc9ef8ee86ec
	I0127 12:35:34.066789    9948 command_runner.go:130] > 4a53e133a1cd
	I0127 12:35:34.066789    9948 command_runner.go:130] > d758000dda95
	I0127 12:35:34.066789    9948 command_runner.go:130] > bbec7ccef7da
	I0127 12:35:34.066789    9948 command_runner.go:130] > f2d0bd65fe50
	I0127 12:35:34.066851    9948 command_runner.go:130] > 319cddeebceb
	I0127 12:35:34.066851    9948 command_runner.go:130] > a16e06a03860
	I0127 12:35:34.066851    9948 command_runner.go:130] > e07a66f8f619
	I0127 12:35:34.066881    9948 command_runner.go:130] > 5f274e5a8851
	I0127 12:35:34.066881    9948 command_runner.go:130] > f91e9c2d3ba6
	I0127 12:35:34.066881    9948 command_runner.go:130] > 1b522c4c9f4c
	I0127 12:35:34.066881    9948 command_runner.go:130] > 51ee4649b24a
	I0127 12:35:34.066881    9948 command_runner.go:130] > 1bd5bf99bede
	I0127 12:35:34.066881    9948 command_runner.go:130] > 5423fc511329
	I0127 12:35:34.066881    9948 docker.go:483] Stopping containers: [f818dd15d8b0 134620caeeb9 bc9ef8ee86ec 4a53e133a1cd d758000dda95 bbec7ccef7da f2d0bd65fe50 319cddeebceb a16e06a03860 e07a66f8f619 5f274e5a8851 f91e9c2d3ba6 1b522c4c9f4c 51ee4649b24a 1bd5bf99bede 5423fc511329]
	I0127 12:35:34.077725    9948 ssh_runner.go:195] Run: docker stop f818dd15d8b0 134620caeeb9 bc9ef8ee86ec 4a53e133a1cd d758000dda95 bbec7ccef7da f2d0bd65fe50 319cddeebceb a16e06a03860 e07a66f8f619 5f274e5a8851 f91e9c2d3ba6 1b522c4c9f4c 51ee4649b24a 1bd5bf99bede 5423fc511329
	I0127 12:35:34.104726    9948 command_runner.go:130] > f818dd15d8b0
	I0127 12:35:34.104726    9948 command_runner.go:130] > 134620caeeb9
	I0127 12:35:34.104726    9948 command_runner.go:130] > bc9ef8ee86ec
	I0127 12:35:34.104726    9948 command_runner.go:130] > 4a53e133a1cd
	I0127 12:35:34.104726    9948 command_runner.go:130] > d758000dda95
	I0127 12:35:34.104726    9948 command_runner.go:130] > bbec7ccef7da
	I0127 12:35:34.104726    9948 command_runner.go:130] > f2d0bd65fe50
	I0127 12:35:34.104726    9948 command_runner.go:130] > 319cddeebceb
	I0127 12:35:34.104726    9948 command_runner.go:130] > a16e06a03860
	I0127 12:35:34.105649    9948 command_runner.go:130] > e07a66f8f619
	I0127 12:35:34.105649    9948 command_runner.go:130] > 5f274e5a8851
	I0127 12:35:34.105649    9948 command_runner.go:130] > f91e9c2d3ba6
	I0127 12:35:34.105649    9948 command_runner.go:130] > 1b522c4c9f4c
	I0127 12:35:34.105649    9948 command_runner.go:130] > 51ee4649b24a
	I0127 12:35:34.105649    9948 command_runner.go:130] > 1bd5bf99bede
	I0127 12:35:34.105726    9948 command_runner.go:130] > 5423fc511329
	I0127 12:35:34.119381    9948 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0127 12:35:34.168359    9948 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0127 12:35:34.187564    9948 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0127 12:35:34.187786    9948 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0127 12:35:34.187933    9948 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0127 12:35:34.187933    9948 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:35:34.188220    9948 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0127 12:35:34.188220    9948 kubeadm.go:157] found existing configuration files:
	
	I0127 12:35:34.199979    9948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0127 12:35:34.216712    9948 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:35:34.218042    9948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0127 12:35:34.229551    9948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0127 12:35:34.256966    9948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0127 12:35:34.272571    9948 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:35:34.272865    9948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0127 12:35:34.284645    9948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0127 12:35:34.320902    9948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0127 12:35:34.338787    9948 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:35:34.339721    9948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0127 12:35:34.351390    9948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0127 12:35:34.382915    9948 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0127 12:35:34.409553    9948 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:35:34.410825    9948 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0127 12:35:34.421087    9948 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
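
The cleanup loop above greps each of the four kubeconfigs under /etc/kubernetes for the expected control-plane endpoint and removes the file whenever grep does not find it; in this run the files simply do not exist yet, so the rm -f calls are no-ops. A rough local equivalent of that loop is sketched below, running directly instead of over ssh_runner; it needs root to touch /etc/kubernetes and is for illustration only.

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	configs := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, path := range configs {
    		data, err := os.ReadFile(path)
    		// Remove the file when it cannot be read or is missing the endpoint,
    		// mirroring the "may not be in ... - will remove" lines in the log.
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
    				fmt.Println("failed to remove", path, ":", rmErr)
    			}
    		}
    	}
    }
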
	I0127 12:35:34.449066    9948 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0127 12:35:34.466099    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:35:34.777331    9948 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0127 12:35:34.777331    9948 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0127 12:35:34.777331    9948 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0127 12:35:34.777331    9948 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0127 12:35:34.777460    9948 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0127 12:35:34.777460    9948 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0127 12:35:34.777460    9948 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0127 12:35:34.777460    9948 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0127 12:35:34.777460    9948 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0127 12:35:34.777571    9948 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0127 12:35:34.777571    9948 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0127 12:35:34.777571    9948 command_runner.go:130] > [certs] Using the existing "sa" key
	I0127 12:35:34.777703    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:35:35.793913    9948 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0127 12:35:35.793913    9948 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0127 12:35:35.793913    9948 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0127 12:35:35.793913    9948 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0127 12:35:35.793913    9948 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0127 12:35:35.793913    9948 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0127 12:35:35.793913    9948 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.0161988s)
	I0127 12:35:35.793913    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:35:36.085887    9948 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0127 12:35:36.085887    9948 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0127 12:35:36.085887    9948 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0127 12:35:36.085887    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:35:36.179991    9948 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0127 12:35:36.180081    9948 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0127 12:35:36.180081    9948 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0127 12:35:36.180081    9948 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0127 12:35:36.180150    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:35:36.259906    9948 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0127 12:35:36.259906    9948 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:35:36.268905    9948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:36.771952    9948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:37.270661    9948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:37.769361    9948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:38.271519    9948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:35:38.299541    9948 command_runner.go:130] > 2017
	I0127 12:35:38.299541    9948 api_server.go:72] duration metric: took 2.0396144s to wait for apiserver process to appear ...
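
Once the static pod manifests are written, minikube polls roughly every 500 ms for a kube-apiserver process; the "2017" line above is the PID that pgrep eventually returned. A stripped-down version of that wait loop is sketched below (run locally rather than through ssh_runner, timeout chosen arbitrarily).

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		// Same pattern as the log: sudo pgrep -xnf kube-apiserver.*minikube.*
    		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			fmt.Println("apiserver process appeared, pid:", strings.TrimSpace(string(out)))
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver process")
    }
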
	I0127 12:35:38.299541    9948 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:35:38.299541    9948 api_server.go:253] Checking apiserver healthz at https://172.29.198.106:8443/healthz ...
	I0127 12:35:41.371814    9948 api_server.go:279] https://172.29.198.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 12:35:41.371814    9948 api_server.go:103] status: https://172.29.198.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 12:35:41.371947    9948 api_server.go:253] Checking apiserver healthz at https://172.29.198.106:8443/healthz ...
	I0127 12:35:41.403172    9948 api_server.go:279] https://172.29.198.106:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0127 12:35:41.403908    9948 api_server.go:103] status: https://172.29.198.106:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0127 12:35:41.800314    9948 api_server.go:253] Checking apiserver healthz at https://172.29.198.106:8443/healthz ...
	I0127 12:35:41.810254    9948 api_server.go:279] https://172.29.198.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:35:41.810303    9948 api_server.go:103] status: https://172.29.198.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:35:42.300026    9948 api_server.go:253] Checking apiserver healthz at https://172.29.198.106:8443/healthz ...
	I0127 12:35:42.307320    9948 api_server.go:279] https://172.29.198.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:35:42.307320    9948 api_server.go:103] status: https://172.29.198.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:35:42.801235    9948 api_server.go:253] Checking apiserver healthz at https://172.29.198.106:8443/healthz ...
	I0127 12:35:42.811831    9948 api_server.go:279] https://172.29.198.106:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0127 12:35:42.811831    9948 api_server.go:103] status: https://172.29.198.106:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0127 12:35:43.300245    9948 api_server.go:253] Checking apiserver healthz at https://172.29.198.106:8443/healthz ...
	I0127 12:35:43.308109    9948 api_server.go:279] https://172.29.198.106:8443/healthz returned 200:
	ok
	I0127 12:35:43.309250    9948 round_trippers.go:463] GET https://172.29.198.106:8443/version
	I0127 12:35:43.309250    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:43.309250    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:43.309316    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:43.323759    9948 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0127 12:35:43.323857    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:43.323857    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:43.323857    9948 round_trippers.go:580]     Content-Length: 263
	I0127 12:35:43.323857    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:43 GMT
	I0127 12:35:43.323857    9948 round_trippers.go:580]     Audit-Id: e6b2733b-3baf-477a-b2db-40e5fbda5916
	I0127 12:35:43.323857    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:43.323857    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:43.323857    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:43.324050    9948 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "32",
	  "gitVersion": "v1.32.1",
	  "gitCommit": "e9c9be4007d1664e68796af02b8978640d2c1b26",
	  "gitTreeState": "clean",
	  "buildDate": "2025-01-15T14:31:55Z",
	  "goVersion": "go1.23.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0127 12:35:43.324193    9948 api_server.go:141] control plane version: v1.32.1
	I0127 12:35:43.324250    9948 api_server.go:131] duration metric: took 5.0246562s to wait for apiserver health ...
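
The 403 and 500 responses above are normal during restart: the unauthenticated probe is rejected until the RBAC policy that allows anonymous access to /healthz has been bootstrapped, /healthz then returns 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and finally flips to 200 with body "ok". A bare-bones version of that health poll is sketched below, using an anonymous HTTPS client that skips certificate verification; the address is the one from the log and the timeout is illustrative.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The apiserver's serving cert is not trusted by this anonymous probe,
    		// so verification is skipped, matching the unauthenticated check above.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://172.29.198.106:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("healthz:", string(body)) // "ok"
    				return
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver healthz")
    }
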
	I0127 12:35:43.324250    9948 cni.go:84] Creating CNI manager for ""
	I0127 12:35:43.324310    9948 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0127 12:35:43.328300    9948 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0127 12:35:43.343783    9948 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0127 12:35:43.352289    9948 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0127 12:35:43.352289    9948 command_runner.go:130] >   Size: 3103192   	Blocks: 6064       IO Block: 4096   regular file
	I0127 12:35:43.352289    9948 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0127 12:35:43.352289    9948 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0127 12:35:43.352289    9948 command_runner.go:130] > Access: 2025-01-27 12:34:12.535327600 +0000
	I0127 12:35:43.352289    9948 command_runner.go:130] > Modify: 2025-01-14 09:03:58.000000000 +0000
	I0127 12:35:43.352289    9948 command_runner.go:130] > Change: 2025-01-27 12:34:04.059000000 +0000
	I0127 12:35:43.352289    9948 command_runner.go:130] >  Birth: -
	I0127 12:35:43.352528    9948 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.1/kubectl ...
	I0127 12:35:43.352600    9948 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0127 12:35:43.447309    9948 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0127 12:35:44.622432    9948 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0127 12:35:44.622527    9948 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0127 12:35:44.622527    9948 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0127 12:35:44.622527    9948 command_runner.go:130] > daemonset.apps/kindnet configured
	I0127 12:35:44.622527    9948 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.32.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.1752062s)
	I0127 12:35:44.622655    9948 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:35:44.622882    9948 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0127 12:35:44.622882    9948 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0127 12:35:44.623115    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods
	I0127 12:35:44.623115    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:44.623162    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:44.623162    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:44.686897    9948 round_trippers.go:574] Response Status: 200 OK in 63 milliseconds
	I0127 12:35:44.686897    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:44.686897    9948 round_trippers.go:580]     Audit-Id: 0888cc0e-7012-4657-adcf-f78ed48588b5
	I0127 12:35:44.686897    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:44.686897    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:44.686897    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:44.686897    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:44.686897    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:44 GMT
	I0127 12:35:44.693884    9948 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1891"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 91550 chars]
	I0127 12:35:44.701115    9948 system_pods.go:59] 12 kube-system pods found
	I0127 12:35:44.701179    9948 system_pods.go:61] "coredns-668d6bf9bc-2qw6w" [8f0367fc-d842-4cc3-8e71-30869a548243] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0127 12:35:44.701179    9948 system_pods.go:61] "etcd-multinode-659000" [4c33fa42-51a7-4a7a-a497-cce80b8773d6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0127 12:35:44.701179    9948 system_pods.go:61] "kindnet-kpfjt" [b00e6ead-b072-40b5-9c87-7697316d8107] Running
	I0127 12:35:44.701179    9948 system_pods.go:61] "kindnet-n7vjl" [23617db6-b970-4ead-845b-69776d50ffef] Running
	I0127 12:35:44.701308    9948 system_pods.go:61] "kindnet-z2hqq" [9b617a9c-e2b8-45fd-bee2-45cb03d4cd42] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0127 12:35:44.701308    9948 system_pods.go:61] "kube-apiserver-multinode-659000" [8fbee94f-fd8f-4431-bd9f-b75d49cb19d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0127 12:35:44.701308    9948 system_pods.go:61] "kube-controller-manager-multinode-659000" [8be02f36-161c-44f3-b526-56db3b8a007a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0127 12:35:44.701308    9948 system_pods.go:61] "kube-proxy-pjhc8" [ddb6698c-b83d-4a49-9672-c894e87cbb66] Running
	I0127 12:35:44.701308    9948 system_pods.go:61] "kube-proxy-s46mv" [ae3b8daf-d674-4cfe-8652-cb5ff6ba8615] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0127 12:35:44.701308    9948 system_pods.go:61] "kube-proxy-sk5js" [ba679e1d-713c-4bd4-b267-2b887c1ac4df] Running
	I0127 12:35:44.701308    9948 system_pods.go:61] "kube-scheduler-multinode-659000" [52b91964-a331-4925-9e07-c8df32b4176d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0127 12:35:44.701308    9948 system_pods.go:61] "storage-provisioner" [bcfd7913-1bc0-4c24-882f-2be92ec9b046] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0127 12:35:44.701308    9948 system_pods.go:74] duration metric: took 78.5775ms to wait for pod list to return data ...
	I0127 12:35:44.701308    9948 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:35:44.701308    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes
	I0127 12:35:44.701308    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:44.701308    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:44.701308    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:44.779677    9948 round_trippers.go:574] Response Status: 200 OK in 78 milliseconds
	I0127 12:35:44.779818    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:44.779884    9948 round_trippers.go:580]     Audit-Id: 9eed8ae3-6e78-4019-8c87-04d758d98dbb
	I0127 12:35:44.779884    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:44.779884    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:44.779884    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:44.779884    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:44.779884    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:44 GMT
	I0127 12:35:44.780081    9948 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1892"},"items":[{"metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1813","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15631 chars]
	I0127 12:35:44.781830    9948 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:35:44.781830    9948 node_conditions.go:123] node cpu capacity is 2
	I0127 12:35:44.781830    9948 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:35:44.781830    9948 node_conditions.go:123] node cpu capacity is 2
	I0127 12:35:44.781830    9948 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:35:44.781830    9948 node_conditions.go:123] node cpu capacity is 2
	I0127 12:35:44.781830    9948 node_conditions.go:105] duration metric: took 80.5203ms to run NodePressure ...
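
The NodePressure step lists all nodes and reads their ephemeral-storage and CPU capacity; the three identical capacity pairs above correspond to the three nodes of the multinode cluster detected earlier. An equivalent query with client-go would look roughly like the sketch below; the kubeconfig path is taken from the log but the program itself is illustrative.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, node := range nodes.Items {
    		// Same fields the log reports: ephemeral storage and CPU capacity.
    		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := node.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", node.Name, storage.String(), cpu.String())
    	}
    }
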
	I0127 12:35:44.781830    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0127 12:35:45.349385    9948 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0127 12:35:45.349385    9948 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0127 12:35:45.349385    9948 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0127 12:35:45.349385    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0127 12:35:45.349385    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:45.349385    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:45.349385    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:45.361302    9948 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0127 12:35:45.361383    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:45.361406    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:45.361406    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:45.361406    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:45.361406    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:45.361406    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:45 GMT
	I0127 12:35:45.361406    9948 round_trippers.go:580]     Audit-Id: 889af32d-71d8-434c-a98e-d987fbb0f3ff
	I0127 12:35:45.361487    9948 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1916"},"items":[{"metadata":{"name":"etcd-multinode-659000","namespace":"kube-system","uid":"4c33fa42-51a7-4a7a-a497-cce80b8773d6","resourceVersion":"1864","creationTimestamp":"2025-01-27T12:35:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.29.198.106:2379","kubernetes.io/config.hash":"575cefa3aa8017dce576fa244e719a4e","kubernetes.io/config.mirror":"575cefa3aa8017dce576fa244e719a4e","kubernetes.io/config.seen":"2025-01-27T12:35:36.285837685Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:35:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotati
ons":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f [truncated 31716 chars]
	I0127 12:35:45.363020    9948 kubeadm.go:739] kubelet initialised
	I0127 12:35:45.363020    9948 kubeadm.go:740] duration metric: took 13.6343ms waiting for restarted kubelet to initialise ...
	I0127 12:35:45.363544    9948 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:35:45.363706    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods
	I0127 12:35:45.363727    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:45.363768    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:45.363768    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:45.368502    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:45.368502    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:45.368502    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:45 GMT
	I0127 12:35:45.368502    9948 round_trippers.go:580]     Audit-Id: dadc4cc3-64a9-4610-9a1f-ea232d5aa1c0
	I0127 12:35:45.368502    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:45.368502    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:45.368502    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:45.368502    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:45.370527    9948 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1916"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90957 chars]
	I0127 12:35:45.373525    9948 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:45.373525    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:35:45.373525    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:45.373525    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:45.373525    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:45.376514    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:35:45.376514    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:45.376514    9948 round_trippers.go:580]     Audit-Id: b0f7f9e2-cb38-4be6-b4e7-6a0f4fbb5651
	I0127 12:35:45.376514    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:45.376514    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:45.376514    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:45.376514    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:45.376514    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:45 GMT
	I0127 12:35:45.376514    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:35:45.377530    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:45.377530    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:45.377530    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:45.377530    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:45.381523    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:35:45.381523    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:45.381523    9948 round_trippers.go:580]     Audit-Id: fbf77371-a0a9-4c29-a553-0ef80275ac50
	I0127 12:35:45.381523    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:45.381523    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:45.381523    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:45.381523    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:45.381523    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:45 GMT
	I0127 12:35:45.381523    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:45.382516    9948 pod_ready.go:98] node "multinode-659000" hosting pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000" has status "Ready":"False"
	I0127 12:35:45.382516    9948 pod_ready.go:82] duration metric: took 8.991ms for pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace to be "Ready" ...
	E0127 12:35:45.382516    9948 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-659000" hosting pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000" has status "Ready":"False"
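
The "(skipping!)" lines are the short-circuit in the extra pod wait: when the hosting node itself reports Ready:"False", waiting on the pod is pointless, so the per-pod wait returns immediately and moves on to the next pod. The gist of that gate, expressed against client-go types, is sketched below; the function name is made up for illustration.

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // nodeIsReady mirrors the check behind the "(skipping!)" log lines: a pod on
    // a node whose Ready condition is not True is not worth waiting for yet.
    func nodeIsReady(node *corev1.Node) bool {
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			return cond.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	node := &corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{
    		{Type: corev1.NodeReady, Status: corev1.ConditionFalse},
    	}}}
    	if !nodeIsReady(node) {
    		fmt.Println(`node has status "Ready":"False" - skipping pod wait`)
    	}
    }
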
	I0127 12:35:45.382516    9948 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:45.382516    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-659000
	I0127 12:35:45.382516    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:45.382516    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:45.382516    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:45.385530    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:35:45.385530    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:45.385530    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:45.385530    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:45.385530    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:45 GMT
	I0127 12:35:45.385530    9948 round_trippers.go:580]     Audit-Id: 924fc52f-715d-406c-8d55-d13ff08e9907
	I0127 12:35:45.385530    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:45.385530    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:45.385530    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-659000","namespace":"kube-system","uid":"4c33fa42-51a7-4a7a-a497-cce80b8773d6","resourceVersion":"1864","creationTimestamp":"2025-01-27T12:35:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.29.198.106:2379","kubernetes.io/config.hash":"575cefa3aa8017dce576fa244e719a4e","kubernetes.io/config.mirror":"575cefa3aa8017dce576fa244e719a4e","kubernetes.io/config.seen":"2025-01-27T12:35:36.285837685Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:35:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6841 chars]
	I0127 12:35:45.385530    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:45.386526    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:45.386526    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:45.386526    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:45.388532    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:35:45.388532    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:45.388532    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:45.388532    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:45.388532    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:45.388532    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:45 GMT
	I0127 12:35:45.388532    9948 round_trippers.go:580]     Audit-Id: c6bdf9b9-e6c2-4dc2-b522-9096f82ded4f
	I0127 12:35:45.388532    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:45.388532    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:45.388532    9948 pod_ready.go:98] node "multinode-659000" hosting pod "etcd-multinode-659000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000" has status "Ready":"False"
	I0127 12:35:45.388532    9948 pod_ready.go:82] duration metric: took 6.0155ms for pod "etcd-multinode-659000" in "kube-system" namespace to be "Ready" ...
	E0127 12:35:45.388532    9948 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-659000" hosting pod "etcd-multinode-659000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000" has status "Ready":"False"
	I0127 12:35:45.388532    9948 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:45.388532    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-659000
	I0127 12:35:45.388532    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:45.388532    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:45.388532    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:45.392518    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:35:45.393105    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:45.393105    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:45.393105    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:45.393105    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:45 GMT
	I0127 12:35:45.393105    9948 round_trippers.go:580]     Audit-Id: 64fe31a7-8b9b-4130-8425-4e54162300e5
	I0127 12:35:45.393186    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:45.393186    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:45.393314    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-659000","namespace":"kube-system","uid":"8fbee94f-fd8f-4431-bd9f-b75d49cb19d4","resourceVersion":"1865","creationTimestamp":"2025-01-27T12:35:42Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.29.198.106:8443","kubernetes.io/config.hash":"b9fbd177058ba298cde2a92c4ef5c601","kubernetes.io/config.mirror":"b9fbd177058ba298cde2a92c4ef5c601","kubernetes.io/config.seen":"2025-01-27T12:35:36.265565317Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:35:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8293 chars]
	I0127 12:35:45.394150    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:45.394205    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:45.394205    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:45.394205    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:45.396899    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:35:45.396899    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:45.396899    9948 round_trippers.go:580]     Audit-Id: 91e9fd33-b24b-4878-9a12-02ed1f23a99f
	I0127 12:35:45.396899    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:45.396899    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:45.396899    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:45.396899    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:45.396899    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:45 GMT
	I0127 12:35:45.396899    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:45.397791    9948 pod_ready.go:98] node "multinode-659000" hosting pod "kube-apiserver-multinode-659000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000" has status "Ready":"False"
	I0127 12:35:45.397859    9948 pod_ready.go:82] duration metric: took 9.3272ms for pod "kube-apiserver-multinode-659000" in "kube-system" namespace to be "Ready" ...
	E0127 12:35:45.397859    9948 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-659000" hosting pod "kube-apiserver-multinode-659000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000" has status "Ready":"False"
	I0127 12:35:45.397916    9948 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:45.398061    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-659000
	I0127 12:35:45.398076    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:45.398076    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:45.398076    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:45.405836    9948 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 12:35:45.406308    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:45.406308    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:45.406308    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:45.406308    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:45 GMT
	I0127 12:35:45.406373    9948 round_trippers.go:580]     Audit-Id: 1db62ae8-7e70-4a97-8c92-de9d8c0020d8
	I0127 12:35:45.406373    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:45.406373    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:45.406404    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-659000","namespace":"kube-system","uid":"8be02f36-161c-44f3-b526-56db3b8a007a","resourceVersion":"1860","creationTimestamp":"2025-01-27T12:11:59Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4a14d0700eafa36dd3913955f2c0f839","kubernetes.io/config.mirror":"4a14d0700eafa36dd3913955f2c0f839","kubernetes.io/config.seen":"2025-01-27T12:11:59.106472767Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:11:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7737 chars]
	I0127 12:35:45.407449    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:45.407514    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:45.407514    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:45.407514    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:45.410044    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:35:45.410113    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:45.410130    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:45 GMT
	I0127 12:35:45.410130    9948 round_trippers.go:580]     Audit-Id: cfe352dd-face-4d67-b055-afb228e5515b
	I0127 12:35:45.410130    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:45.410130    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:45.410130    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:45.410130    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:45.410396    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:45.410396    9948 pod_ready.go:98] node "multinode-659000" hosting pod "kube-controller-manager-multinode-659000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000" has status "Ready":"False"
	I0127 12:35:45.410396    9948 pod_ready.go:82] duration metric: took 12.4288ms for pod "kube-controller-manager-multinode-659000" in "kube-system" namespace to be "Ready" ...
	E0127 12:35:45.410396    9948 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-659000" hosting pod "kube-controller-manager-multinode-659000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000" has status "Ready":"False"
	I0127 12:35:45.411150    9948 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-pjhc8" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:45.550275    9948 request.go:632] Waited for 139.1229ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pjhc8
	I0127 12:35:45.550653    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pjhc8
	I0127 12:35:45.550689    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:45.550689    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:45.550734    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:45.554184    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:35:45.554276    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:45.554276    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:45.554276    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:45.554276    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:45 GMT
	I0127 12:35:45.554276    9948 round_trippers.go:580]     Audit-Id: 6abb2687-6d94-44fb-9ad9-c29c2e602707
	I0127 12:35:45.554276    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:45.554276    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:45.554606    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pjhc8","generateName":"kube-proxy-","namespace":"kube-system","uid":"ddb6698c-b83d-4a49-9672-c894e87cbb66","resourceVersion":"626","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d88eb776-b464-4f2b-8140-44249610a7fa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d88eb776-b464-4f2b-8140-44249610a7fa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6207 chars]
	I0127 12:35:45.750312    9948 request.go:632] Waited for 195.5144ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:35:45.750312    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:35:45.750312    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:45.750312    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:45.750312    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:45.750312    9948 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0127 12:35:45.750312    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:45.750312    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:45.750312    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:45 GMT
	I0127 12:35:45.750312    9948 round_trippers.go:580]     Audit-Id: 344a35cd-63ed-4749-9075-1e32d1280e98
	I0127 12:35:45.750312    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:45.750312    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:45.750312    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:45.750312    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"1482","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3828 chars]
	I0127 12:35:45.750312    9948 pod_ready.go:93] pod "kube-proxy-pjhc8" in "kube-system" namespace has status "Ready":"True"
	I0127 12:35:45.750312    9948 pod_ready.go:82] duration metric: took 339.1583ms for pod "kube-proxy-pjhc8" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:45.750312    9948 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-s46mv" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:45.954768    9948 request.go:632] Waited for 204.4542ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s46mv
	I0127 12:35:45.955070    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s46mv
	I0127 12:35:45.955070    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:45.955070    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:45.955070    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:45.958848    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:35:45.958848    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:45.958848    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:45.958848    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:45.958848    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:45.958848    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:45 GMT
	I0127 12:35:45.958991    9948 round_trippers.go:580]     Audit-Id: 6620372a-f334-4139-8de8-80e58730afab
	I0127 12:35:45.958991    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:45.959373    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s46mv","generateName":"kube-proxy-","namespace":"kube-system","uid":"ae3b8daf-d674-4cfe-8652-cb5ff6ba8615","resourceVersion":"1898","creationTimestamp":"2025-01-27T12:12:03Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d88eb776-b464-4f2b-8140-44249610a7fa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d88eb776-b464-4f2b-8140-44249610a7fa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6405 chars]
	I0127 12:35:46.150608    9948 request.go:632] Waited for 190.3966ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:46.150608    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:46.150608    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:46.150608    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:46.150608    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:46.155647    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:35:46.155713    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:46.155713    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:46.155713    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:46 GMT
	I0127 12:35:46.155713    9948 round_trippers.go:580]     Audit-Id: e1ea1884-aebc-4345-acd4-b6e046d869be
	I0127 12:35:46.155771    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:46.155771    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:46.155771    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:46.156114    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:46.157022    9948 pod_ready.go:98] node "multinode-659000" hosting pod "kube-proxy-s46mv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000" has status "Ready":"False"
	I0127 12:35:46.157022    9948 pod_ready.go:82] duration metric: took 406.7054ms for pod "kube-proxy-s46mv" in "kube-system" namespace to be "Ready" ...
	E0127 12:35:46.157128    9948 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-659000" hosting pod "kube-proxy-s46mv" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000" has status "Ready":"False"
	I0127 12:35:46.157128    9948 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-sk5js" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:46.349932    9948 request.go:632] Waited for 192.5365ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sk5js
	I0127 12:35:46.349932    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sk5js
	I0127 12:35:46.349932    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:46.349932    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:46.349932    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:46.354742    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:46.354742    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:46.354742    9948 round_trippers.go:580]     Audit-Id: f16df18b-c2d8-4639-9942-d4bfdad6529b
	I0127 12:35:46.354742    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:46.354742    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:46.354742    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:46.354742    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:46.354742    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:46 GMT
	I0127 12:35:46.354742    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sk5js","generateName":"kube-proxy-","namespace":"kube-system","uid":"ba679e1d-713c-4bd4-b267-2b887c1ac4df","resourceVersion":"1793","creationTimestamp":"2025-01-27T12:19:54Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d88eb776-b464-4f2b-8140-44249610a7fa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:19:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d88eb776-b464-4f2b-8140-44249610a7fa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6428 chars]
	I0127 12:35:46.549498    9948 request.go:632] Waited for 193.3007ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/nodes/multinode-659000-m03
	I0127 12:35:46.549978    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000-m03
	I0127 12:35:46.550092    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:46.550143    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:46.550173    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:46.554344    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:46.554471    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:46.554471    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:46.554471    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:46 GMT
	I0127 12:35:46.554471    9948 round_trippers.go:580]     Audit-Id: e35ee77c-59b2-4228-b7f0-5050d7835f01
	I0127 12:35:46.554471    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:46.554471    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:46.554471    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:46.554696    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m03","uid":"0516f5fa-16ad-40aa-9616-01d098e46466","resourceVersion":"1895","creationTimestamp":"2025-01-27T12:31:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_31_04_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:31:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4303 chars]
	I0127 12:35:46.554899    9948 pod_ready.go:98] node "multinode-659000-m03" hosting pod "kube-proxy-sk5js" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000-m03" has status "Ready":"Unknown"
	I0127 12:35:46.554899    9948 pod_ready.go:82] duration metric: took 397.7673ms for pod "kube-proxy-sk5js" in "kube-system" namespace to be "Ready" ...
	E0127 12:35:46.554899    9948 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-659000-m03" hosting pod "kube-proxy-sk5js" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000-m03" has status "Ready":"Unknown"
	I0127 12:35:46.554899    9948 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:35:46.749489    9948 request.go:632] Waited for 194.0588ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-659000
	I0127 12:35:46.749489    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-659000
	I0127 12:35:46.749489    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:46.749489    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:46.749489    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:46.754326    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:46.754326    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:46.754326    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:46 GMT
	I0127 12:35:46.754326    9948 round_trippers.go:580]     Audit-Id: f9d03568-2030-408b-b7e0-db35a0757255
	I0127 12:35:46.754326    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:46.754326    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:46.754326    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:46.754326    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:46.754326    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-659000","namespace":"kube-system","uid":"52b91964-a331-4925-9e07-c8df32b4176d","resourceVersion":"1862","creationTimestamp":"2025-01-27T12:11:58Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e6c90fc43fa6c0754218ff1c4162045d","kubernetes.io/config.mirror":"e6c90fc43fa6c0754218ff1c4162045d","kubernetes.io/config.seen":"2025-01-27T12:11:51.419790825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:11:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5812 chars]
	I0127 12:35:46.949752    9948 request.go:632] Waited for 194.3032ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:46.949752    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:46.949752    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:46.949752    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:46.949752    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:46.953238    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:35:46.954115    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:46.954115    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:46.954115    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:46.954115    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:46 GMT
	I0127 12:35:46.954115    9948 round_trippers.go:580]     Audit-Id: 1229297d-0ae3-4415-bc24-02245312e592
	I0127 12:35:46.954115    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:46.954115    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:46.954553    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:46.954935    9948 pod_ready.go:98] node "multinode-659000" hosting pod "kube-scheduler-multinode-659000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000" has status "Ready":"False"
	I0127 12:35:46.955049    9948 pod_ready.go:82] duration metric: took 400.146ms for pod "kube-scheduler-multinode-659000" in "kube-system" namespace to be "Ready" ...
	E0127 12:35:46.955049    9948 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-659000" hosting pod "kube-scheduler-multinode-659000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000" has status "Ready":"False"
	I0127 12:35:46.955049    9948 pod_ready.go:39] duration metric: took 1.5914889s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:35:46.955049    9948 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0127 12:35:46.974171    9948 command_runner.go:130] > -16
	I0127 12:35:46.974171    9948 ops.go:34] apiserver oom_adj: -16
	I0127 12:35:46.974171    9948 kubeadm.go:597] duration metric: took 13.029694s to restartPrimaryControlPlane
	I0127 12:35:46.974352    9948 kubeadm.go:394] duration metric: took 13.0914766s to StartCluster
	I0127 12:35:46.974352    9948 settings.go:142] acquiring lock: {Name:mk6b97e58c5fe8f88c3b8025e136ed13b1b7453d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:35:46.974539    9948 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 12:35:46.976373    9948 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\kubeconfig: {Name:mk4f4c590fd703778dedd3b8c3d630c561af8c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 12:35:46.978521    9948 start.go:235] Will wait 6m0s for node &{Name: IP:172.29.198.106 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0127 12:35:46.978521    9948 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0127 12:35:46.979304    9948 config.go:182] Loaded profile config "multinode-659000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:35:46.981962    9948 out.go:177] * Enabled addons: 
	I0127 12:35:46.983988    9948 out.go:177] * Verifying Kubernetes components...
	I0127 12:35:46.990038    9948 addons.go:514] duration metric: took 11.5167ms for enable addons: enabled=[]
	I0127 12:35:47.006335    9948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:35:47.275775    9948 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0127 12:35:47.301208    9948 node_ready.go:35] waiting up to 6m0s for node "multinode-659000" to be "Ready" ...
	I0127 12:35:47.301439    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:47.301474    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:47.301474    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:47.301474    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:47.308418    9948 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:35:47.308418    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:47.308418    9948 round_trippers.go:580]     Audit-Id: 36f9e360-021a-4313-9fc3-519d46bbe416
	I0127 12:35:47.308418    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:47.308418    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:47.308418    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:47.308418    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:47.308418    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:47 GMT
	I0127 12:35:47.309064    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:47.801954    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:47.801954    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:47.801954    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:47.801954    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:47.805038    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:35:47.805108    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:47.805108    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:47.805108    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:47.805108    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:47.805108    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:47 GMT
	I0127 12:35:47.805167    9948 round_trippers.go:580]     Audit-Id: 30e51e57-2d57-49f4-8aaa-996ae7dc9801
	I0127 12:35:47.805167    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:47.805499    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:48.301588    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:48.301588    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:48.301588    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:48.301588    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:48.306532    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:48.306532    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:48.306532    9948 round_trippers.go:580]     Audit-Id: 1d241cdd-b2cc-40fa-a217-e3f0106e18b1
	I0127 12:35:48.306716    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:48.306716    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:48.306768    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:48.306768    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:48.306805    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:48 GMT
	I0127 12:35:48.307176    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:48.801713    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:48.801713    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:48.801713    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:48.801713    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:48.804720    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:35:48.804720    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:48.804720    9948 round_trippers.go:580]     Audit-Id: 79445d3b-cf3a-4375-8f8a-24844786a835
	I0127 12:35:48.804720    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:48.804720    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:48.804720    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:48.804720    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:48.804720    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:48 GMT
	I0127 12:35:48.805872    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:49.302141    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:49.302141    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:49.302141    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:49.302141    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:49.306596    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:49.307320    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:49.307320    9948 round_trippers.go:580]     Audit-Id: e7787a8c-3bd2-4d28-9ebd-b4dc25085a20
	I0127 12:35:49.307320    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:49.307320    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:49.307320    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:49.307320    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:49.307320    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:49 GMT
	I0127 12:35:49.307791    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:49.308361    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:35:49.801327    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:49.801327    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:49.801327    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:49.801327    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:49.805682    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:49.805788    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:49.805788    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:49.805788    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:49.805788    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:49 GMT
	I0127 12:35:49.805867    9948 round_trippers.go:580]     Audit-Id: d70d3060-48f1-4777-b3b6-e891f3efb479
	I0127 12:35:49.805867    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:49.805867    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:49.806252    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:50.301397    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:50.301397    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:50.301397    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:50.301397    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:50.306588    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:35:50.306588    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:50.306588    9948 round_trippers.go:580]     Audit-Id: ff352193-065f-4a51-b045-aa96c204d770
	I0127 12:35:50.306588    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:50.306588    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:50.306588    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:50.306588    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:50.306588    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:50 GMT
	I0127 12:35:50.307025    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:50.802132    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:50.802132    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:50.802132    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:50.802132    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:50.806350    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:50.806460    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:50.806460    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:50 GMT
	I0127 12:35:50.806460    9948 round_trippers.go:580]     Audit-Id: 87103fb1-ed34-468e-8812-b0acf460fe60
	I0127 12:35:50.806460    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:50.806546    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:50.806546    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:50.806546    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:50.806909    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:51.301795    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:51.301795    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:51.301795    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:51.301795    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:51.307328    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:35:51.307419    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:51.307419    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:51.307419    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:51.307510    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:51 GMT
	I0127 12:35:51.307510    9948 round_trippers.go:580]     Audit-Id: 86bb1816-8895-4cc6-9f39-2f92f390dc54
	I0127 12:35:51.307510    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:51.307510    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:51.307789    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:51.308487    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:35:51.801848    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:51.801960    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:51.801960    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:51.801960    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:51.808641    9948 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:35:51.808641    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:51.808641    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:51.808641    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:51 GMT
	I0127 12:35:51.808641    9948 round_trippers.go:580]     Audit-Id: 6582cb10-6afd-4ef4-83f8-be93bf836294
	I0127 12:35:51.808641    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:51.808641    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:51.808641    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:51.809373    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:52.301978    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:52.301978    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:52.302083    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:52.302083    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:52.306904    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:52.306904    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:52.307025    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:52.307025    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:52 GMT
	I0127 12:35:52.307025    9948 round_trippers.go:580]     Audit-Id: 0376b027-aef3-4f71-b932-6b82b572adaa
	I0127 12:35:52.307025    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:52.307025    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:52.307025    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:52.308064    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:52.801486    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:52.801486    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:52.801486    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:52.801486    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:52.806054    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:52.806054    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:52.806054    9948 round_trippers.go:580]     Audit-Id: c25c342c-6a13-4864-b4b4-124b54c50e02
	I0127 12:35:52.806054    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:52.806054    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:52.806170    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:52.806170    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:52.806170    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:52 GMT
	I0127 12:35:52.806703    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:53.301626    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:53.301626    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:53.301626    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:53.301626    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:53.305819    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:53.306674    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:53.306674    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:53.306674    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:53.306674    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:53.306747    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:53.306747    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:53 GMT
	I0127 12:35:53.306747    9948 round_trippers.go:580]     Audit-Id: 79527147-335f-4c85-961d-9af5c797b5f9
	I0127 12:35:53.307003    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:53.801570    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:53.801570    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:53.801570    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:53.801570    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:53.806616    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:35:53.806718    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:53.806718    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:53.806718    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:53.806718    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:53.806718    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:53 GMT
	I0127 12:35:53.806718    9948 round_trippers.go:580]     Audit-Id: e5af9b4d-0a2b-467f-9b30-4154a06cb3b3
	I0127 12:35:53.806718    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:53.807354    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:53.808028    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:35:54.301636    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:54.301636    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:54.301636    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:54.301636    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:54.307046    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:35:54.307046    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:54.307046    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:54 GMT
	I0127 12:35:54.307046    9948 round_trippers.go:580]     Audit-Id: 49e45828-a9e3-45f4-af06-f03cc8beaa7b
	I0127 12:35:54.307046    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:54.307046    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:54.307046    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:54.307046    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:54.307934    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:54.802073    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:54.802073    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:54.802197    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:54.802197    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:54.806091    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:35:54.806221    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:54.806221    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:54.806221    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:54.806274    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:54.806274    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:54.806274    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:54 GMT
	I0127 12:35:54.806274    9948 round_trippers.go:580]     Audit-Id: 34597499-8ff1-4310-beb5-7d428276851a
	I0127 12:35:54.806274    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:55.302465    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:55.302567    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:55.302567    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:55.302567    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:55.308084    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:35:55.308084    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:55.308084    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:55 GMT
	I0127 12:35:55.308084    9948 round_trippers.go:580]     Audit-Id: 93d49b61-09ce-41fc-842c-926a7eac715c
	I0127 12:35:55.308084    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:55.308192    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:55.308192    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:55.308192    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:55.308422    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:55.801620    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:55.802194    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:55.802194    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:55.802194    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:55.806794    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:55.807361    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:55.807361    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:55 GMT
	I0127 12:35:55.807361    9948 round_trippers.go:580]     Audit-Id: dd9b87a4-8a74-4061-9973-e41d1f72df58
	I0127 12:35:55.807361    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:55.807361    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:55.807361    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:55.807361    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:55.807699    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:56.302673    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:56.302673    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:56.302673    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:56.302673    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:56.306892    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:56.306892    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:56.306999    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:56.306999    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:56.306999    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:56 GMT
	I0127 12:35:56.307032    9948 round_trippers.go:580]     Audit-Id: af2120f8-a0ef-4f1b-ba51-156eb95fa991
	I0127 12:35:56.307032    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:56.307032    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:56.307065    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:56.307686    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:35:56.801426    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:56.801426    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:56.801426    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:56.801426    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:56.805434    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:56.805434    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:56.805434    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:56.805434    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:56.805434    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:56.805434    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:56.805434    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:56 GMT
	I0127 12:35:56.805434    9948 round_trippers.go:580]     Audit-Id: 7120e14b-84f6-42d4-b4b3-4a453569483d
	I0127 12:35:56.805434    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:57.302348    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:57.302348    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:57.302467    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:57.302467    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:57.310152    9948 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 12:35:57.310181    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:57.310181    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:57.310181    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:57.310181    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:57.310271    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:57 GMT
	I0127 12:35:57.310271    9948 round_trippers.go:580]     Audit-Id: f7b94c7a-4684-4852-92dd-c334c3237005
	I0127 12:35:57.310271    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:57.311115    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:57.801639    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:57.801639    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:57.801639    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:57.801639    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:57.808000    9948 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:35:57.808726    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:57.808726    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:57 GMT
	I0127 12:35:57.808726    9948 round_trippers.go:580]     Audit-Id: 13d61e92-67a9-4701-ad50-2e94c15e8331
	I0127 12:35:57.808726    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:57.808726    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:57.808726    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:57.808774    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:57.809458    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:58.301935    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:58.301935    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:58.301935    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:58.301935    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:58.309691    9948 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 12:35:58.309691    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:58.309691    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:58.309691    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:58 GMT
	I0127 12:35:58.309691    9948 round_trippers.go:580]     Audit-Id: d99ab032-a5a5-40f7-9cc5-4971d572177f
	I0127 12:35:58.309691    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:58.309691    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:58.309875    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:58.309986    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:58.310851    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:35:58.802180    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:58.802180    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:58.802180    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:58.802180    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:58.806922    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:58.806922    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:58.806922    9948 round_trippers.go:580]     Audit-Id: 4483bb59-13a1-493e-8017-205f017898b7
	I0127 12:35:58.806922    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:58.806922    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:58.806922    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:58.806922    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:58.806922    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:58 GMT
	I0127 12:35:58.808310    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:59.302120    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:59.302120    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:59.302120    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:59.302120    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:59.306432    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:59.307084    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:59.307084    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:59.307084    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:59.307084    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:59 GMT
	I0127 12:35:59.307084    9948 round_trippers.go:580]     Audit-Id: 28390c7d-c3e2-4f19-9a3c-5c0f82fa4169
	I0127 12:35:59.307084    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:59.307084    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:59.307432    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:35:59.802177    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:35:59.802177    9948 round_trippers.go:469] Request Headers:
	I0127 12:35:59.802177    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:35:59.802177    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:35:59.806775    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:35:59.806775    9948 round_trippers.go:577] Response Headers:
	I0127 12:35:59.807200    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:35:59.807200    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:35:59 GMT
	I0127 12:35:59.807200    9948 round_trippers.go:580]     Audit-Id: c592dd70-08ea-478b-8b6c-048056a610d7
	I0127 12:35:59.807200    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:35:59.807200    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:35:59.807200    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:35:59.807592    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:00.301394    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:00.301394    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:00.301394    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:00.301394    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:00.306361    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:00.306426    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:00.306426    9948 round_trippers.go:580]     Audit-Id: 262dde43-dda4-4826-aa43-36de1afc877a
	I0127 12:36:00.306426    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:00.306426    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:00.306487    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:00.306487    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:00.306487    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:00 GMT
	I0127 12:36:00.306676    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:00.802794    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:00.802794    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:00.802962    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:00.802962    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:00.807128    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:00.807128    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:00.807128    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:00.807128    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:00.807128    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:00.807128    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:00 GMT
	I0127 12:36:00.807128    9948 round_trippers.go:580]     Audit-Id: 0ae6b675-41fb-42f3-a026-8f54dc6d6141
	I0127 12:36:00.807128    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:00.807919    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:00.808510    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:01.302478    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:01.302551    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:01.302551    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:01.302551    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:01.306345    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:01.307277    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:01.307277    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:01.307277    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:01.307277    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:01 GMT
	I0127 12:36:01.307277    9948 round_trippers.go:580]     Audit-Id: 7224d9ba-5aa3-4833-a81e-9649baae8fb4
	I0127 12:36:01.307381    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:01.307381    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:01.307381    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:01.802376    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:01.802376    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:01.802376    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:01.802376    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:01.806436    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:01.806436    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:01.806493    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:01.806493    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:01.806493    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:01.806493    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:01 GMT
	I0127 12:36:01.806493    9948 round_trippers.go:580]     Audit-Id: 0d9da99c-2904-4f4f-84ec-3730a00d79fe
	I0127 12:36:01.806493    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:01.807287    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:02.301603    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:02.301603    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:02.301603    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:02.301603    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:02.305179    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:02.305179    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:02.305395    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:02.305395    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:02.305395    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:02.305395    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:02 GMT
	I0127 12:36:02.305395    9948 round_trippers.go:580]     Audit-Id: 7a507d30-eae2-493c-a3d5-613cf8553d6e
	I0127 12:36:02.305395    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:02.305623    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:02.802639    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:02.802731    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:02.802731    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:02.802731    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:02.808465    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:02.808492    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:02.808492    9948 round_trippers.go:580]     Audit-Id: 1f04222b-76f3-44e0-900e-ac6918d3e378
	I0127 12:36:02.808492    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:02.808492    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:02.808541    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:02.808541    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:02.808541    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:02 GMT
	I0127 12:36:02.810083    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:02.810486    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:03.301967    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:03.301967    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:03.301967    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:03.301967    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:03.306638    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:03.306638    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:03.306638    9948 round_trippers.go:580]     Audit-Id: 9e94bbdb-a993-40f2-99b3-761e59a2d333
	I0127 12:36:03.306638    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:03.306638    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:03.306638    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:03.306638    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:03.306638    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:03 GMT
	I0127 12:36:03.306978    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:03.801941    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:03.802005    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:03.802005    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:03.802005    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:03.806897    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:03.807004    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:03.807004    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:03.807004    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:03.807004    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:03.807004    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:03.807004    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:03 GMT
	I0127 12:36:03.807004    9948 round_trippers.go:580]     Audit-Id: d1f1551a-35b5-4082-b8fa-7e3e05edc0b8
	I0127 12:36:03.807275    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:04.302050    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:04.302050    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:04.302050    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:04.302050    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:04.307985    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:04.308118    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:04.308118    9948 round_trippers.go:580]     Audit-Id: 81173ab7-8afd-471f-898a-bf9ade4902b2
	I0127 12:36:04.308118    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:04.308118    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:04.308118    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:04.308118    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:04.308118    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:04 GMT
	I0127 12:36:04.308196    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:04.801902    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:04.801902    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:04.801902    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:04.801902    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:04.807155    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:04.807155    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:04.807155    9948 round_trippers.go:580]     Audit-Id: 349a1595-f1d4-4315-9ffb-4a65b00557b1
	I0127 12:36:04.807155    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:04.807155    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:04.807155    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:04.807155    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:04.807262    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:04 GMT
	I0127 12:36:04.807679    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:05.302030    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:05.302030    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:05.302030    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:05.302030    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:05.306743    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:05.306743    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:05.306743    9948 round_trippers.go:580]     Audit-Id: 31444957-8e84-496f-ad90-8f51aea870f7
	I0127 12:36:05.306743    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:05.306743    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:05.306743    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:05.306957    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:05.306957    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:05 GMT
	I0127 12:36:05.307246    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:05.307690    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:05.802683    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:05.802683    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:05.802683    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:05.802817    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:05.807163    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:05.807163    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:05.807163    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:05.807163    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:05 GMT
	I0127 12:36:05.807163    9948 round_trippers.go:580]     Audit-Id: b78b8e0c-5b64-48bf-98e4-89a9298d378c
	I0127 12:36:05.807163    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:05.807163    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:05.807163    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:05.807163    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:06.302723    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:06.302753    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:06.302818    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:06.302847    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:06.307621    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:06.307708    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:06.307708    9948 round_trippers.go:580]     Audit-Id: 501cde61-d8b3-4f85-b17a-7fec455c4a59
	I0127 12:36:06.307708    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:06.307708    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:06.307784    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:06.307784    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:06.307784    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:06 GMT
	I0127 12:36:06.308036    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:06.802987    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:06.802987    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:06.802987    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:06.802987    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:06.808013    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:06.808013    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:06.808013    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:06.808153    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:06.808153    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:06 GMT
	I0127 12:36:06.808153    9948 round_trippers.go:580]     Audit-Id: 3d70d454-a8de-49ff-a85a-7b5369e73188
	I0127 12:36:06.808153    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:06.808153    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:06.808466    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:07.302240    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:07.302240    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:07.302240    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:07.302240    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:07.307267    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:07.307313    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:07.307313    9948 round_trippers.go:580]     Audit-Id: b97334de-dbe2-4fc4-bc45-175918d6ff31
	I0127 12:36:07.307363    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:07.307363    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:07.307363    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:07.307363    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:07.307363    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:07 GMT
	I0127 12:36:07.307537    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:07.802403    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:07.802403    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:07.802403    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:07.802403    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:07.806873    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:07.806905    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:07.806905    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:07.806905    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:07 GMT
	I0127 12:36:07.806905    9948 round_trippers.go:580]     Audit-Id: cb80eb35-b5f1-401d-b1bd-9007c3be701d
	I0127 12:36:07.806905    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:07.806905    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:07.806905    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:07.807305    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:07.807305    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:08.301529    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:08.301529    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:08.301529    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:08.301529    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:08.306666    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:08.306747    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:08.306747    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:08.306747    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:08.306747    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:08 GMT
	I0127 12:36:08.306747    9948 round_trippers.go:580]     Audit-Id: 50962b7a-c0c0-43e1-a768-816272b98ac7
	I0127 12:36:08.306747    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:08.306820    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:08.307175    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:08.802483    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:08.802483    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:08.802483    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:08.802483    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:08.807264    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:08.807264    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:08.807338    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:08.807338    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:08 GMT
	I0127 12:36:08.807338    9948 round_trippers.go:580]     Audit-Id: 1076b5d8-85d8-4d1b-85b6-311915086cad
	I0127 12:36:08.807338    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:08.807338    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:08.807338    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:08.807736    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:09.301610    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:09.301610    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:09.301610    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:09.301610    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:09.306594    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:09.306716    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:09.306816    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:09.306816    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:09.306816    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:09 GMT
	I0127 12:36:09.306865    9948 round_trippers.go:580]     Audit-Id: dfe9c8e4-d7b3-482e-a39b-bf6a16659349
	I0127 12:36:09.306865    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:09.306865    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:09.307062    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:09.802056    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:09.802056    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:09.802056    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:09.802056    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:09.808430    9948 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:36:09.808430    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:09.808430    9948 round_trippers.go:580]     Audit-Id: 2a691a54-2fb2-4181-94e4-4d042a53e533
	I0127 12:36:09.808430    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:09.808430    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:09.808430    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:09.808430    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:09.808430    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:09 GMT
	I0127 12:36:09.808430    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:09.809156    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:10.301504    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:10.301504    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:10.301504    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:10.301504    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:10.306748    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:10.306748    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:10.306748    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:10.306748    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:10.306748    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:10.306748    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:10.306748    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:10 GMT
	I0127 12:36:10.306748    9948 round_trippers.go:580]     Audit-Id: bf9e319d-37f6-48e0-8e9a-a47bcd455abd
	I0127 12:36:10.306748    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:10.801560    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:10.801560    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:10.801560    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:10.801560    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:10.806525    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:10.806525    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:10.806525    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:10.806525    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:10.806525    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:10.806525    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:10.806525    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:10 GMT
	I0127 12:36:10.806525    9948 round_trippers.go:580]     Audit-Id: aef7fd32-1085-49e2-a197-c3be119a43e2
	I0127 12:36:10.806750    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:11.302078    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:11.302585    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:11.302585    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:11.302585    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:11.307435    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:11.307435    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:11.307604    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:11.307604    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:11 GMT
	I0127 12:36:11.307604    9948 round_trippers.go:580]     Audit-Id: 6687c558-d01a-428a-a22d-5dee880e730a
	I0127 12:36:11.307604    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:11.307604    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:11.307604    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:11.307877    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:11.801959    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:11.801959    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:11.801959    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:11.801959    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:11.807415    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:11.807473    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:11.807473    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:11.807473    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:11.807473    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:11.807473    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:11 GMT
	I0127 12:36:11.807473    9948 round_trippers.go:580]     Audit-Id: 8c919ebd-dc8d-42d4-b8fa-38bdf4307836
	I0127 12:36:11.807473    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:11.807791    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:12.301512    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:12.301512    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:12.301512    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:12.301512    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:12.305325    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:12.305325    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:12.305428    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:12.305447    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:12.305447    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:12.305447    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:12 GMT
	I0127 12:36:12.305447    9948 round_trippers.go:580]     Audit-Id: 64aa28be-8592-41ab-873c-a0ef7d93f091
	I0127 12:36:12.305447    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:12.305695    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:12.306271    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:12.801928    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:12.802508    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:12.802508    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:12.802508    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:12.806912    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:12.806912    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:12.806912    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:12.806912    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:12.806912    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:12.806912    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:12.806912    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:12 GMT
	I0127 12:36:12.806912    9948 round_trippers.go:580]     Audit-Id: 3b893e8d-b38c-4bd6-921a-03668ef2bd09
	I0127 12:36:12.807369    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:13.301830    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:13.301830    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:13.301830    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:13.301830    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:13.306935    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:13.307663    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:13.307663    9948 round_trippers.go:580]     Audit-Id: 047f9ed5-c7bc-4f3a-9dd6-2a1a588a002e
	I0127 12:36:13.307663    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:13.307663    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:13.307663    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:13.307663    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:13.307663    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:13 GMT
	I0127 12:36:13.307757    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:13.801761    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:13.801761    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:13.801761    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:13.801761    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:13.807176    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:13.807176    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:13.807394    9948 round_trippers.go:580]     Audit-Id: 25f6a7db-40aa-4d5f-981f-2e36e9132c78
	I0127 12:36:13.807394    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:13.807394    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:13.807394    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:13.807394    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:13.807394    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:13 GMT
	I0127 12:36:13.807394    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:14.302157    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:14.302157    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:14.302157    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:14.302157    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:14.307210    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:14.307210    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:14.307210    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:14.307210    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:14.307210    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:14.307210    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:14 GMT
	I0127 12:36:14.307210    9948 round_trippers.go:580]     Audit-Id: 1c3c4965-5dd5-4a9b-91f3-8bb34ba25b22
	I0127 12:36:14.307210    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:14.307982    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:14.309513    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:14.801629    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:14.801629    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:14.801629    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:14.801629    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:14.806808    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:14.806874    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:14.806874    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:14 GMT
	I0127 12:36:14.806874    9948 round_trippers.go:580]     Audit-Id: 533da9ef-0c00-4280-a65c-bbca8f1dabc8
	I0127 12:36:14.806963    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:14.807036    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:14.807036    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:14.807036    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:14.807374    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:15.302094    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:15.302094    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:15.302094    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:15.302094    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:15.307048    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:15.307048    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:15.307048    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:15.307048    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:15.307048    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:15.307048    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:15.307048    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:15 GMT
	I0127 12:36:15.307048    9948 round_trippers.go:580]     Audit-Id: b39dcd86-9358-4b13-9a4d-bed4ec175ab2
	I0127 12:36:15.307048    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:15.802663    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:15.802663    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:15.802663    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:15.802663    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:15.807456    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:15.807584    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:15.807584    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:15 GMT
	I0127 12:36:15.807584    9948 round_trippers.go:580]     Audit-Id: 33cb5268-1248-46c5-8e2c-ee2ac34f3f17
	I0127 12:36:15.807584    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:15.807584    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:15.807584    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:15.807584    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:15.807728    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:16.302685    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:16.302685    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:16.302685    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:16.302685    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:16.307209    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:16.307209    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:16.307209    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:16.307209    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:16.307300    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:16.307300    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:16 GMT
	I0127 12:36:16.307300    9948 round_trippers.go:580]     Audit-Id: 026bbfa3-af7c-42f1-809e-d3987da29eb4
	I0127 12:36:16.307300    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:16.307527    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:16.802860    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:16.802934    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:16.802934    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:16.802934    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:16.807148    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:16.807206    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:16.807206    9948 round_trippers.go:580]     Audit-Id: a4c6d309-0927-42c2-a583-ff2f1cde7443
	I0127 12:36:16.807206    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:16.807206    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:16.807206    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:16.807206    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:16.807206    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:16 GMT
	I0127 12:36:16.807739    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:16.808247    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:17.302962    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:17.302962    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:17.302962    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:17.302962    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:17.308068    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:17.308068    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:17.308209    9948 round_trippers.go:580]     Audit-Id: 7f38be74-d83e-4adf-81cd-24ccf7814720
	I0127 12:36:17.308209    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:17.308209    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:17.308209    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:17.308209    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:17.308209    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:17 GMT
	I0127 12:36:17.308497    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:17.801642    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:17.801642    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:17.801642    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:17.802109    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:17.805820    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:17.805888    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:17.805888    9948 round_trippers.go:580]     Audit-Id: 7bf90979-82e8-4e43-a5fd-63cbe0045643
	I0127 12:36:17.805966    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:17.805966    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:17.805966    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:17.805966    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:17.805966    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:17 GMT
	I0127 12:36:17.806350    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:18.302084    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:18.302084    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:18.302084    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:18.302084    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:18.305911    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:18.305977    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:18.305977    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:18 GMT
	I0127 12:36:18.305977    9948 round_trippers.go:580]     Audit-Id: 5946fc9b-60fe-4a7f-87cc-7376ff4ab8c3
	I0127 12:36:18.305977    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:18.305977    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:18.305977    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:18.306047    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:18.307350    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:18.801848    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:18.801848    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:18.801848    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:18.801848    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:18.805775    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:18.805775    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:18.805775    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:18.805775    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:18.805775    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:18.805775    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:18 GMT
	I0127 12:36:18.805775    9948 round_trippers.go:580]     Audit-Id: 8fc4aecf-8db5-4c36-92b1-76eb5497c630
	I0127 12:36:18.805775    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:18.806227    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:19.301603    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:19.301603    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:19.301603    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:19.301603    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:19.305828    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:19.305828    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:19.305889    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:19.305889    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:19.305889    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:19 GMT
	I0127 12:36:19.305889    9948 round_trippers.go:580]     Audit-Id: 28881e53-ad10-4ede-aae1-d4aa1d2448dd
	I0127 12:36:19.305889    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:19.305889    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:19.307137    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:19.307137    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:19.802099    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:19.802099    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:19.802099    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:19.802099    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:19.806326    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:19.806326    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:19.806392    9948 round_trippers.go:580]     Audit-Id: 3a515eb8-11fd-4385-9c2e-7093ce7a2a6e
	I0127 12:36:19.806392    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:19.806392    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:19.806392    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:19.806392    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:19.806392    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:19 GMT
	I0127 12:36:19.807014    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:20.301696    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:20.301696    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:20.301696    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:20.301696    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:20.306489    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:20.306925    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:20.306925    9948 round_trippers.go:580]     Audit-Id: 4d8f5af0-9f2b-43df-81a4-73e3ba345c7e
	I0127 12:36:20.306925    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:20.306925    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:20.306925    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:20.306925    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:20.306925    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:20 GMT
	I0127 12:36:20.307265    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:20.802233    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:20.802233    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:20.802233    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:20.802233    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:20.806437    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:20.806546    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:20.806546    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:20.806546    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:20.806546    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:20 GMT
	I0127 12:36:20.806546    9948 round_trippers.go:580]     Audit-Id: aa4fb124-4b0c-4e9f-84e0-d1b36701cb2a
	I0127 12:36:20.806546    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:20.806546    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:20.806821    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:21.301816    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:21.301816    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:21.301816    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:21.301816    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:21.306056    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:21.306056    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:21.306056    9948 round_trippers.go:580]     Audit-Id: 3662c43c-7b6c-428b-9e56-0d07e57147c4
	I0127 12:36:21.306056    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:21.306056    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:21.306056    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:21.306056    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:21.306056    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:21 GMT
	I0127 12:36:21.306394    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:21.801640    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:21.801640    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:21.801640    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:21.801640    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:21.805718    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:21.805789    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:21.805789    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:21.805789    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:21 GMT
	I0127 12:36:21.805789    9948 round_trippers.go:580]     Audit-Id: 97231ee4-b0d1-4c65-87ed-465f5bb47979
	I0127 12:36:21.805789    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:21.805856    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:21.805856    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:21.806193    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:21.806661    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:22.301665    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:22.301665    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:22.301665    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:22.301665    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:22.306745    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:22.306745    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:22.306745    9948 round_trippers.go:580]     Audit-Id: fb669eac-56b3-4e9b-afd7-4bddac9303b0
	I0127 12:36:22.306745    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:22.306745    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:22.306871    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:22.306871    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:22.306871    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:22 GMT
	I0127 12:36:22.307200    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:22.801710    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:22.802381    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:22.802381    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:22.802381    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:22.806038    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:22.806153    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:22.806153    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:22.806153    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:22.806153    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:22.806153    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:22.806153    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:22 GMT
	I0127 12:36:22.806153    9948 round_trippers.go:580]     Audit-Id: ae0c2a71-f5ad-4a6e-80d1-51ce243bfc64
	I0127 12:36:22.806492    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:23.302579    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:23.302665    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:23.302665    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:23.302665    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:23.307014    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:23.307099    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:23.307099    9948 round_trippers.go:580]     Audit-Id: 7a125feb-c987-4db6-90c5-c45a848e9cff
	I0127 12:36:23.307099    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:23.307099    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:23.307099    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:23.307099    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:23.307099    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:23 GMT
	I0127 12:36:23.307099    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:23.803716    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:23.803908    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:23.803908    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:23.804007    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:23.808266    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:23.808368    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:23.808368    9948 round_trippers.go:580]     Audit-Id: 7a18bfcc-33d2-42f2-a4e7-eb722491297e
	I0127 12:36:23.808436    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:23.808436    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:23.808436    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:23.808436    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:23.808436    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:23 GMT
	I0127 12:36:23.808530    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:23.809695    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:24.302094    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:24.302094    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:24.302094    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:24.302094    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:24.306229    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:24.306369    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:24.306369    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:24.306369    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:24.306369    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:24 GMT
	I0127 12:36:24.306369    9948 round_trippers.go:580]     Audit-Id: f98ff32a-ded4-49ba-beea-31d01e567f31
	I0127 12:36:24.306369    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:24.306369    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:24.306884    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:24.802032    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:24.802032    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:24.802032    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:24.802032    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:24.806857    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:24.806857    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:24.806857    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:24.806857    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:24.806857    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:24 GMT
	I0127 12:36:24.806857    9948 round_trippers.go:580]     Audit-Id: 7f1ccc0b-fa0b-48eb-8ed8-084905216477
	I0127 12:36:24.806857    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:24.806857    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:24.807228    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:25.301795    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:25.301795    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:25.301795    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:25.301795    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:25.307427    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:25.307427    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:25.307497    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:25.307497    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:25.307497    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:25.307497    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:25 GMT
	I0127 12:36:25.307497    9948 round_trippers.go:580]     Audit-Id: 7a9fc569-4008-44e7-bdcf-213be93d278f
	I0127 12:36:25.307497    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:25.307759    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:25.802906    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:25.802906    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:25.803084    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:25.803084    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:25.808103    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:25.808168    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:25.808168    9948 round_trippers.go:580]     Audit-Id: 775ab9ae-db4b-454f-a6f2-477b0d689244
	I0127 12:36:25.808168    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:25.808168    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:25.808168    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:25.808168    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:25.808168    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:25 GMT
	I0127 12:36:25.808326    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:26.301904    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:26.301904    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:26.301904    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:26.301904    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:26.306406    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:26.306457    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:26.306457    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:26.306457    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:26 GMT
	I0127 12:36:26.306495    9948 round_trippers.go:580]     Audit-Id: d728ea5a-a1eb-46e3-bb98-4fd7ba61b7d1
	I0127 12:36:26.306495    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:26.306495    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:26.306495    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:26.306694    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:26.307678    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:26.802281    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:26.802281    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:26.802281    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:26.802281    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:26.805887    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:26.806871    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:26.806871    9948 round_trippers.go:580]     Audit-Id: 86fc58ae-3e2a-4d3e-845b-6f251be9180f
	I0127 12:36:26.806871    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:26.806871    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:26.806871    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:26.806871    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:26.806871    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:26 GMT
	I0127 12:36:26.807272    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:27.302565    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:27.302565    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:27.302565    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:27.302565    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:27.307037    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:27.307037    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:27.307158    9948 round_trippers.go:580]     Audit-Id: bf219283-a7b0-46fb-be16-4b193abe4ae5
	I0127 12:36:27.307158    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:27.307158    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:27.307158    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:27.307158    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:27.307158    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:27 GMT
	I0127 12:36:27.307324    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:27.802414    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:27.802534    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:27.802534    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:27.802534    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:27.805663    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:27.805663    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:27.805663    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:27.805663    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:27.805663    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:27.805663    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:27 GMT
	I0127 12:36:27.805663    9948 round_trippers.go:580]     Audit-Id: f54a1816-a932-4619-8752-0a528c064fa0
	I0127 12:36:27.805663    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:27.806103    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:28.301778    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:28.301778    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:28.301778    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:28.301778    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:28.305749    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:28.305749    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:28.305749    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:28.305749    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:28.305749    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:28.305749    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:28.305749    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:28 GMT
	I0127 12:36:28.305749    9948 round_trippers.go:580]     Audit-Id: 3aefacb0-46be-451d-9889-f58dbbb5649c
	I0127 12:36:28.306304    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:28.802935    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:28.803012    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:28.803012    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:28.803012    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:28.807911    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:28.807979    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:28.807979    9948 round_trippers.go:580]     Audit-Id: 126ab12d-5d06-4665-ab6d-d759801f2588
	I0127 12:36:28.807979    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:28.807979    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:28.807979    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:28.807979    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:28.807979    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:28 GMT
	I0127 12:36:28.809462    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:28.810230    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:29.302504    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:29.302808    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:29.302808    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:29.302885    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:29.306698    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:29.306698    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:29.306698    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:29.306698    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:29.306698    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:29.306698    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:29.306698    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:29 GMT
	I0127 12:36:29.306698    9948 round_trippers.go:580]     Audit-Id: cf9acb4c-ff42-4acf-9d6e-8aa371733611
	I0127 12:36:29.306698    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:29.802983    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:29.803054    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:29.803054    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:29.803054    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:29.806805    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:29.806805    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:29.806872    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:29 GMT
	I0127 12:36:29.806872    9948 round_trippers.go:580]     Audit-Id: 95247832-a3d7-4b82-a006-f1613cd7d2f9
	I0127 12:36:29.806872    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:29.806872    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:29.806872    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:29.806872    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:29.807346    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:30.302638    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:30.302638    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:30.302638    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:30.302638    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:30.307303    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:30.307303    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:30.307303    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:30.307303    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:30.307303    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:30.307303    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:30 GMT
	I0127 12:36:30.307303    9948 round_trippers.go:580]     Audit-Id: df371b34-e74a-490f-8ba3-fa30d6ec44c7
	I0127 12:36:30.307303    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:30.307546    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:30.802554    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:30.802554    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:30.802554    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:30.802554    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:30.807573    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:30.807573    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:30.807573    9948 round_trippers.go:580]     Audit-Id: a1e16273-558f-40ff-b196-346cf0d2aafc
	I0127 12:36:30.807573    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:30.807573    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:30.807573    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:30.807573    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:30.807573    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:30 GMT
	I0127 12:36:30.807573    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:31.302489    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:31.302489    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:31.302489    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:31.302489    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:31.307396    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:31.307396    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:31.307396    9948 round_trippers.go:580]     Audit-Id: 81033689-77d8-4e33-a66a-5f5a1e0438dd
	I0127 12:36:31.307396    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:31.307396    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:31.307396    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:31.307632    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:31.307632    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:31 GMT
	I0127 12:36:31.307786    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:31.308381    9948 node_ready.go:53] node "multinode-659000" has status "Ready":"False"
	I0127 12:36:31.801956    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:31.801956    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:31.801956    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:31.801956    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:31.807383    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:31.807383    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:31.807383    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:31.807383    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:31.807383    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:31 GMT
	I0127 12:36:31.807383    9948 round_trippers.go:580]     Audit-Id: 21c0ccaa-4540-48a3-8be8-838ebeee9c2d
	I0127 12:36:31.807383    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:31.807484    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:31.807845    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:32.302894    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:32.302894    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:32.302894    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:32.302894    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:32.307832    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:32.307899    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:32.307899    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:32.307899    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:32.307976    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:32.307976    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:32.307976    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:32 GMT
	I0127 12:36:32.307976    9948 round_trippers.go:580]     Audit-Id: 0f668223-a2c2-43e4-99f5-0513fec4861f
	I0127 12:36:32.308715    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:32.802466    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:32.802466    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:32.802466    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:32.802466    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:32.805440    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:32.805542    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:32.805542    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:32.805542    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:32.805542    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:32 GMT
	I0127 12:36:32.805542    9948 round_trippers.go:580]     Audit-Id: 4d21f474-13f9-4d03-8c4f-788d85208ace
	I0127 12:36:32.805542    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:32.805542    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:32.806058    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1905","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0127 12:36:33.302377    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:33.302377    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:33.302377    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:33.302377    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:33.307502    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:33.307593    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:33.307593    9948 round_trippers.go:580]     Audit-Id: 0d5a2b96-20ee-43d1-94e4-6caac8f3a1bb
	I0127 12:36:33.307593    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:33.307593    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:33.307593    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:33.307593    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:33.307593    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:33 GMT
	I0127 12:36:33.307992    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1987","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0127 12:36:33.308641    9948 node_ready.go:49] node "multinode-659000" has status "Ready":"True"
	I0127 12:36:33.308709    9948 node_ready.go:38] duration metric: took 46.0070183s for node "multinode-659000" to be "Ready" ...
	I0127 12:36:33.308793    9948 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:36:33.308897    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods
	I0127 12:36:33.308897    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:33.308897    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:33.308897    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:33.313244    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:33.313856    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:33.313856    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:33.313856    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:33.313924    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:33 GMT
	I0127 12:36:33.313924    9948 round_trippers.go:580]     Audit-Id: 0730b251-13a7-4fd7-9649-390e753b15c3
	I0127 12:36:33.313924    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:33.313924    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:33.315523    9948 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1988"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 89983 chars]
	I0127 12:36:33.320195    9948 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:33.320195    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:33.320195    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:33.320195    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:33.320195    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:33.322953    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:33.322953    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:33.322953    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:33.322953    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:33.322953    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:33 GMT
	I0127 12:36:33.322953    9948 round_trippers.go:580]     Audit-Id: 2a2a77f8-3199-4e12-b2aa-dec11e378238
	I0127 12:36:33.322953    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:33.323906    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:33.323970    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:33.324546    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:33.324546    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:33.324546    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:33.324744    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:33.327635    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:33.327719    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:33.327719    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:33.327719    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:33.327719    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:33.327719    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:33.327719    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:33 GMT
	I0127 12:36:33.327719    9948 round_trippers.go:580]     Audit-Id: 569e0565-32ac-4968-8993-e251035f54f1
	I0127 12:36:33.327719    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1987","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0127 12:36:33.820814    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:33.820976    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:33.820976    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:33.820976    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:33.826489    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:33.826489    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:33.826489    9948 round_trippers.go:580]     Audit-Id: 77dc52d9-d416-4b93-8086-4cc47fae25db
	I0127 12:36:33.826489    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:33.826489    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:33.826489    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:33.826489    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:33.826489    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:33 GMT
	I0127 12:36:33.827198    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:33.828133    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:33.828133    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:33.828133    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:33.828133    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:33.831399    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:33.831476    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:33.831476    9948 round_trippers.go:580]     Audit-Id: 17e23220-0633-401a-b8ee-a2212ec49798
	I0127 12:36:33.831476    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:33.831476    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:33.831476    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:33.831476    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:33.831476    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:33 GMT
	I0127 12:36:33.831911    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1987","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0127 12:36:34.321352    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:34.321352    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:34.321352    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:34.321352    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:34.326089    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:34.326089    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:34.326089    9948 round_trippers.go:580]     Audit-Id: 62af299b-9422-4a85-9f23-6896769f4a83
	I0127 12:36:34.326089    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:34.326089    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:34.326089    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:34.326089    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:34.326089    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:34 GMT
	I0127 12:36:34.326089    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:34.327429    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:34.327474    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:34.327474    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:34.327520    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:34.329972    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:34.329972    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:34.329972    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:34.329972    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:34.329972    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:34 GMT
	I0127 12:36:34.329972    9948 round_trippers.go:580]     Audit-Id: 391a1c3b-fd97-4a2f-98bd-a95ea28fc080
	I0127 12:36:34.329972    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:34.329972    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:34.330362    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1987","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0127 12:36:34.822026    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:34.822026    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:34.822108    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:34.822108    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:34.827094    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:34.827094    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:34.827094    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:34.827094    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:34.827094    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:34.827094    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:34.827094    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:34 GMT
	I0127 12:36:34.827094    9948 round_trippers.go:580]     Audit-Id: e912fe5d-c2c1-4701-9740-bac6cf17ac06
	I0127 12:36:34.827331    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:34.828034    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:34.828034    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:34.828034    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:34.828148    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:34.831865    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:34.831865    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:34.831865    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:34.831865    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:34.831865    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:34.831865    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:34 GMT
	I0127 12:36:34.831865    9948 round_trippers.go:580]     Audit-Id: a5248d15-b0e5-450b-8d82-d751d51bf412
	I0127 12:36:34.831865    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:34.831865    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1987","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0127 12:36:35.320703    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:35.320703    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:35.320703    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:35.320703    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:35.324643    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:35.324712    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:35.324712    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:35.324712    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:35.324712    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:35.324712    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:35.324712    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:35 GMT
	I0127 12:36:35.324712    9948 round_trippers.go:580]     Audit-Id: ad680456-dba7-4567-8a65-c1931a0ffa52
	I0127 12:36:35.324943    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:35.325795    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:35.325795    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:35.325795    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:35.325795    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:35.328791    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:35.328791    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:35.328791    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:35.328791    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:35.328791    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:35.328791    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:35.328791    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:35 GMT
	I0127 12:36:35.328791    9948 round_trippers.go:580]     Audit-Id: 90bf4e39-bfa2-4e12-817e-7f9382329bcc
	I0127 12:36:35.329723    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:35.330210    9948 pod_ready.go:103] pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:35.820464    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:35.820464    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:35.820464    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:35.820464    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:35.825481    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:35.825481    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:35.825481    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:35.825481    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:35 GMT
	I0127 12:36:35.825481    9948 round_trippers.go:580]     Audit-Id: 6eca0920-d2d5-41bd-9284-cb068ef4926b
	I0127 12:36:35.825481    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:35.825481    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:35.825481    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:35.825481    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:35.826780    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:35.826780    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:35.826780    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:35.826895    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:35.829095    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:35.829778    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:35.829778    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:35.829778    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:35.829778    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:35 GMT
	I0127 12:36:35.829778    9948 round_trippers.go:580]     Audit-Id: 1e6da329-2013-48ae-80ed-2544432dc75f
	I0127 12:36:35.829778    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:35.829778    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:35.830341    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:36.320699    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:36.320699    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:36.320699    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:36.320699    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:36.324439    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:36.324504    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:36.324504    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:36.324504    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:36.324504    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:36.324504    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:36.324504    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:36 GMT
	I0127 12:36:36.324504    9948 round_trippers.go:580]     Audit-Id: f6703be8-18fb-4eec-a00c-a258b1deff1e
	I0127 12:36:36.324754    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:36.325224    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:36.325224    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:36.325224    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:36.325224    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:36.328501    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:36.328533    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:36.328577    9948 round_trippers.go:580]     Audit-Id: 4642b17f-d803-49ff-b56e-45a5abcd4d44
	I0127 12:36:36.328577    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:36.328577    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:36.328577    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:36.328577    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:36.328605    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:36 GMT
	I0127 12:36:36.328935    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:36.821479    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:36.821479    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:36.821479    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:36.821479    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:36.826092    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:36.826092    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:36.826092    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:36 GMT
	I0127 12:36:36.826092    9948 round_trippers.go:580]     Audit-Id: 7fd63d0e-6a21-4817-aa5a-b508421b7477
	I0127 12:36:36.826092    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:36.826092    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:36.826092    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:36.826092    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:36.826328    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:36.826600    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:36.826600    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:36.826600    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:36.826600    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:36.829504    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:36.829796    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:36.829796    9948 round_trippers.go:580]     Audit-Id: 1f89c64e-cfe2-498c-a969-0662949d923d
	I0127 12:36:36.829796    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:36.829796    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:36.829796    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:36.829796    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:36.829796    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:36 GMT
	I0127 12:36:36.830000    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:37.320933    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:37.320933    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:37.320933    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:37.320933    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:37.326193    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:37.326263    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:37.326263    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:37 GMT
	I0127 12:36:37.326263    9948 round_trippers.go:580]     Audit-Id: 9ff4cccb-53dd-4e67-a13e-ffec69ad3ea5
	I0127 12:36:37.326263    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:37.326263    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:37.326263    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:37.326317    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:37.326347    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:37.327309    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:37.327401    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:37.327401    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:37.327401    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:37.330251    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:37.330251    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:37.330251    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:37 GMT
	I0127 12:36:37.330251    9948 round_trippers.go:580]     Audit-Id: 4dd66b4a-6caf-4906-b952-c19c0ebb7d5e
	I0127 12:36:37.330251    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:37.330251    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:37.330251    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:37.330251    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:37.330251    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:37.331669    9948 pod_ready.go:103] pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:37.820544    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:37.820544    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:37.820544    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:37.820544    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:37.825114    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:37.825114    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:37.825114    9948 round_trippers.go:580]     Audit-Id: c458a91e-2744-4255-ad1c-e8f374539e14
	I0127 12:36:37.825114    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:37.825114    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:37.825114    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:37.825114    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:37.825114    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:37 GMT
	I0127 12:36:37.825114    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:37.826218    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:37.826299    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:37.826299    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:37.826299    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:37.829705    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:37.829705    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:37.829705    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:37.829705    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:37 GMT
	I0127 12:36:37.829705    9948 round_trippers.go:580]     Audit-Id: f80ff682-422e-4c89-abd7-2dc22f8a0f47
	I0127 12:36:37.829705    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:37.829705    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:37.829705    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:37.830309    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:38.320686    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:38.320686    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:38.320686    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:38.320686    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:38.326732    9948 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:36:38.326732    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:38.326732    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:38.326732    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:38 GMT
	I0127 12:36:38.326732    9948 round_trippers.go:580]     Audit-Id: 2c09b2e5-209b-41b9-99ee-27d5973e52b5
	I0127 12:36:38.326732    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:38.326732    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:38.326732    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:38.327562    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:38.328413    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:38.328413    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:38.328413    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:38.328413    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:38.331012    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:38.331012    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:38.331012    9948 round_trippers.go:580]     Audit-Id: 90b87926-098e-4f69-a18e-46d806a32bc9
	I0127 12:36:38.331012    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:38.331012    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:38.331012    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:38.331012    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:38.331012    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:38 GMT
	I0127 12:36:38.331012    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:38.821070    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:38.821070    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:38.821070    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:38.821147    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:38.825917    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:38.826036    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:38.826066    9948 round_trippers.go:580]     Audit-Id: 299a4e77-cc72-451a-83e9-006b80ea8b41
	I0127 12:36:38.826066    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:38.826066    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:38.826066    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:38.826142    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:38.826173    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:38 GMT
	I0127 12:36:38.826317    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:38.827062    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:38.827259    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:38.827259    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:38.827259    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:38.829605    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:38.830260    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:38.830260    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:38.830260    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:38.830260    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:38.830260    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:38.830260    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:38 GMT
	I0127 12:36:38.830260    9948 round_trippers.go:580]     Audit-Id: 7e4a598b-43fc-4780-acff-1857a86a40cd
	I0127 12:36:38.830649    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:39.321081    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:39.321081    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:39.321081    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:39.321081    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:39.326041    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:39.326109    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:39.326183    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:39.326183    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:39.326183    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:39.326183    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:39 GMT
	I0127 12:36:39.326183    9948 round_trippers.go:580]     Audit-Id: 4c36b3e8-84b5-4e58-bc8d-091181e93fd6
	I0127 12:36:39.326183    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:39.326478    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:39.327061    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:39.327061    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:39.327061    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:39.327061    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:39.330640    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:39.330894    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:39.330894    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:39.330894    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:39.330894    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:39 GMT
	I0127 12:36:39.330894    9948 round_trippers.go:580]     Audit-Id: 09f4cb5d-f762-492f-8fd2-db25aa633485
	I0127 12:36:39.330894    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:39.330894    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:39.331275    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:39.331774    9948 pod_ready.go:103] pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:39.821389    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:39.821389    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:39.821389    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:39.821389    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:39.825836    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:39.825836    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:39.825836    9948 round_trippers.go:580]     Audit-Id: 96d50f92-e468-4df2-a42e-e12e9a7e7ffd
	I0127 12:36:39.825836    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:39.825836    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:39.825836    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:39.825836    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:39.825836    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:39 GMT
	I0127 12:36:39.825836    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:39.826916    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:39.826916    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:39.826999    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:39.826999    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:39.829988    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:39.829988    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:39.829988    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:39.829988    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:39.829988    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:39.830106    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:39 GMT
	I0127 12:36:39.830106    9948 round_trippers.go:580]     Audit-Id: 94ec9d44-3c5f-425f-ae27-f6e93ff23189
	I0127 12:36:39.830106    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:39.830397    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:40.321366    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:40.321366    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:40.321366    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:40.321366    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:40.326284    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:40.326284    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:40.326284    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:40 GMT
	I0127 12:36:40.326284    9948 round_trippers.go:580]     Audit-Id: 6619e06d-2270-494b-a3bf-75378b31fa38
	I0127 12:36:40.326284    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:40.326284    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:40.326284    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:40.326284    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:40.326907    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:40.327931    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:40.327931    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:40.328024    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:40.328024    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:40.330814    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:40.331793    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:40.331793    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:40.331793    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:40.331793    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:40.331793    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:40.331793    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:40 GMT
	I0127 12:36:40.331793    9948 round_trippers.go:580]     Audit-Id: 0ec7db17-f855-4e00-815c-08829cd9975f
	I0127 12:36:40.332042    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:40.820861    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:40.820861    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:40.820861    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:40.820861    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:40.824107    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:40.824107    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:40.824107    9948 round_trippers.go:580]     Audit-Id: 5d1bf74e-6e66-40f9-9b35-7673a4dea054
	I0127 12:36:40.824107    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:40.824107    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:40.824107    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:40.824107    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:40.824107    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:40 GMT
	I0127 12:36:40.824107    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:40.825263    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:40.825263    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:40.825263    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:40.825263    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:40.827900    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:40.827986    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:40.827986    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:40.827986    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:40.827986    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:40.827986    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:40 GMT
	I0127 12:36:40.827986    9948 round_trippers.go:580]     Audit-Id: d1e6c02c-8c4b-437d-9853-871133e118cc
	I0127 12:36:40.828069    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:40.828556    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:41.320979    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:41.320979    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:41.320979    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:41.320979    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:41.324611    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:41.324611    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:41.324611    9948 round_trippers.go:580]     Audit-Id: c3a64e1c-1ba6-4071-906e-0f94efdc34c9
	I0127 12:36:41.324611    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:41.324611    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:41.324611    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:41.324611    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:41.324611    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:41 GMT
	I0127 12:36:41.325328    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:41.326018    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:41.326075    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:41.326075    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:41.326075    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:41.328384    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:41.328384    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:41.328452    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:41.328452    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:41.328452    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:41.328452    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:41 GMT
	I0127 12:36:41.328452    9948 round_trippers.go:580]     Audit-Id: eb33792e-b287-47e5-85df-360fd77dbb66
	I0127 12:36:41.328452    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:41.328795    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:41.821678    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:41.821678    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:41.821678    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:41.821678    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:41.825439    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:41.825531    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:41.825531    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:41.825531    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:41.825609    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:41.825609    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:41 GMT
	I0127 12:36:41.825609    9948 round_trippers.go:580]     Audit-Id: 35bc2cd2-c8a6-4622-aee6-25efa69650d4
	I0127 12:36:41.825609    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:41.825763    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:41.826356    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:41.826356    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:41.826356    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:41.826356    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:41.828944    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:41.828944    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:41.828944    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:41 GMT
	I0127 12:36:41.828944    9948 round_trippers.go:580]     Audit-Id: 8942abae-06d9-445c-9ad6-6bc988527c6c
	I0127 12:36:41.828944    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:41.828944    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:41.828944    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:41.828944    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:41.831215    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:41.831215    9948 pod_ready.go:103] pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:42.321254    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:42.321254    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:42.321254    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:42.321254    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:42.326755    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:42.326812    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:42.326812    9948 round_trippers.go:580]     Audit-Id: 5c4fedb5-53d9-4fe2-8d0b-8480313db713
	I0127 12:36:42.326812    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:42.326812    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:42.326863    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:42.326863    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:42.326863    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:42 GMT
	I0127 12:36:42.327080    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:42.328025    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:42.328054    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:42.328054    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:42.328110    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:42.329940    9948 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0127 12:36:42.331030    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:42.331058    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:42.331058    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:42.331058    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:42 GMT
	I0127 12:36:42.331058    9948 round_trippers.go:580]     Audit-Id: a985d4d4-cc11-41d8-9b1e-8d8326154bf2
	I0127 12:36:42.331058    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:42.331058    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:42.331459    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:42.821058    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:42.821554    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:42.821554    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:42.821554    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:42.825087    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:42.825087    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:42.825087    9948 round_trippers.go:580]     Audit-Id: c868490d-2c4f-4a10-b693-f51d31e7322b
	I0127 12:36:42.825087    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:42.826094    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:42.826094    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:42.826094    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:42.826094    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:42 GMT
	I0127 12:36:42.826320    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:42.827999    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:42.827999    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:42.827999    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:42.827999    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:42.837542    9948 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0127 12:36:42.838395    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:42.838395    9948 round_trippers.go:580]     Audit-Id: 87a72fea-1b12-4eb8-a62f-0400451dab7d
	I0127 12:36:42.838395    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:42.838395    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:42.838395    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:42.838395    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:42.838395    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:42 GMT
	I0127 12:36:42.838800    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:43.320919    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:43.320992    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:43.321063    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:43.321063    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:43.324954    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:43.325088    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:43.325088    9948 round_trippers.go:580]     Audit-Id: 54edfd64-ac5d-47fe-9191-dc131ebcc440
	I0127 12:36:43.325088    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:43.325088    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:43.325088    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:43.325088    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:43.325137    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:43 GMT
	I0127 12:36:43.325241    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:43.326032    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:43.326085    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:43.326085    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:43.326085    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:43.329040    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:43.329040    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:43.329040    9948 round_trippers.go:580]     Audit-Id: b9a89f32-83aa-4d39-87d0-70654bfe1e2e
	I0127 12:36:43.329040    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:43.329040    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:43.329040    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:43.329040    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:43.329040    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:43 GMT
	I0127 12:36:43.329594    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:43.820553    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:43.820553    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:43.820553    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:43.820553    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:43.825900    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:43.825900    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:43.825900    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:43.826088    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:43.826088    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:43.826088    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:43 GMT
	I0127 12:36:43.826088    9948 round_trippers.go:580]     Audit-Id: c26c8ce4-3843-44e5-9264-5e1e574bbd8f
	I0127 12:36:43.826088    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:43.826280    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:43.827541    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:43.827541    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:43.827541    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:43.827541    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:43.831828    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:43.831828    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:43.831828    9948 round_trippers.go:580]     Audit-Id: 91dfcc2a-cbdf-4951-801f-a5427f673887
	I0127 12:36:43.831828    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:43.831828    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:43.831828    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:43.831828    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:43.831828    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:43 GMT
	I0127 12:36:43.831828    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:43.832707    9948 pod_ready.go:103] pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:44.320690    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:44.320690    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:44.320690    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:44.320690    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:44.323966    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:44.323966    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:44.323966    9948 round_trippers.go:580]     Audit-Id: 6cc2ce16-ca4e-4a07-95da-141006da92b2
	I0127 12:36:44.324069    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:44.324069    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:44.324069    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:44.324069    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:44.324069    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:44 GMT
	I0127 12:36:44.324166    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:44.324885    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:44.324885    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:44.324885    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:44.324885    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:44.328506    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:44.328506    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:44.328595    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:44.328595    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:44 GMT
	I0127 12:36:44.328595    9948 round_trippers.go:580]     Audit-Id: d4ba006e-97f6-427c-91d1-4425543d2724
	I0127 12:36:44.328595    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:44.328595    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:44.328595    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:44.328868    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:44.820894    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:44.821461    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:44.821461    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:44.821461    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:44.825409    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:44.825409    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:44.825409    9948 round_trippers.go:580]     Audit-Id: 12cb0afd-4713-40d7-ba85-d35466c0e5c5
	I0127 12:36:44.825409    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:44.825409    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:44.825409    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:44.825409    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:44.825409    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:44 GMT
	I0127 12:36:44.825649    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:44.826364    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:44.826467    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:44.826467    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:44.826467    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:44.828367    9948 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0127 12:36:44.829246    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:44.829246    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:44.829246    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:44.829246    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:44 GMT
	I0127 12:36:44.829246    9948 round_trippers.go:580]     Audit-Id: 8881fe61-2c50-4442-bdff-ee08c1492cfd
	I0127 12:36:44.829246    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:44.829246    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:44.829562    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:45.321583    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:45.321583    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:45.321583    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:45.321583    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:45.325435    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:45.325497    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:45.325497    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:45.325497    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:45.325497    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:45 GMT
	I0127 12:36:45.325497    9948 round_trippers.go:580]     Audit-Id: 15bd9c8b-72ad-4d79-82a2-43838af25a23
	I0127 12:36:45.325497    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:45.325497    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:45.325717    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:45.326691    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:45.326766    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:45.326766    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:45.326766    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:45.329998    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:45.330034    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:45.330089    9948 round_trippers.go:580]     Audit-Id: 50ad3c45-2ddd-4675-b901-57772ffb59c7
	I0127 12:36:45.330089    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:45.330089    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:45.330089    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:45.330089    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:45.330089    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:45 GMT
	I0127 12:36:45.330250    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:45.820465    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:45.820465    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:45.820465    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:45.820465    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:45.825690    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:45.825805    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:45.825805    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:45.825805    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:45.825805    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:45 GMT
	I0127 12:36:45.825805    9948 round_trippers.go:580]     Audit-Id: c74d55b4-0078-4574-bf09-8b00be6fac2b
	I0127 12:36:45.825805    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:45.825805    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:45.826175    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:45.826982    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:45.826982    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:45.826982    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:45.826982    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:45.833217    9948 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:36:45.833298    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:45.833298    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:45.833298    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:45 GMT
	I0127 12:36:45.833298    9948 round_trippers.go:580]     Audit-Id: d5d93877-e067-4d8c-8e62-cd8b20d3e3bf
	I0127 12:36:45.833298    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:45.833298    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:45.833298    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:45.834363    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:45.835468    9948 pod_ready.go:103] pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:46.321354    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:46.321354    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:46.321354    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:46.321354    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:46.324390    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:46.324390    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:46.324390    9948 round_trippers.go:580]     Audit-Id: d8ce2e3b-cfe5-4d49-a4eb-1a03a2828629
	I0127 12:36:46.324390    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:46.324390    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:46.324390    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:46.324390    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:46.324390    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:46 GMT
	I0127 12:36:46.324390    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:46.324390    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:46.324390    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:46.324390    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:46.324390    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:46.331225    9948 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:36:46.331225    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:46.331225    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:46.331225    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:46.331225    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:46 GMT
	I0127 12:36:46.331225    9948 round_trippers.go:580]     Audit-Id: 411188c6-0279-40df-978f-7d9770829b9f
	I0127 12:36:46.331225    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:46.331225    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:46.331651    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:46.821298    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:46.821298    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:46.821298    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:46.821298    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:46.825935    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:46.826017    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:46.826017    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:46.826017    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:46.826017    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:46 GMT
	I0127 12:36:46.826017    9948 round_trippers.go:580]     Audit-Id: 04df8bcb-1dcc-46db-a73f-48b4c3d191d6
	I0127 12:36:46.826124    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:46.826124    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:46.826354    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:46.826530    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:46.826530    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:46.827101    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:46.827101    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:46.836171    9948 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0127 12:36:46.836171    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:46.836171    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:46.836171    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:46 GMT
	I0127 12:36:46.836171    9948 round_trippers.go:580]     Audit-Id: 57c979ff-a6b0-44f7-8b81-9d083bd9c742
	I0127 12:36:46.836171    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:46.836171    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:46.836171    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:46.836171    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:47.321368    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:47.321368    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:47.321368    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:47.321368    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:47.326355    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:47.326422    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:47.326422    9948 round_trippers.go:580]     Audit-Id: adb60445-50b1-4c22-b003-d57848839eaf
	I0127 12:36:47.326422    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:47.326422    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:47.326422    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:47.326422    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:47.326499    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:47 GMT
	I0127 12:36:47.326808    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"1873","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7099 chars]
	I0127 12:36:47.327737    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:47.327737    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:47.327737    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:47.327737    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:47.331334    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:47.331334    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:47.331334    9948 round_trippers.go:580]     Audit-Id: f91dfa9b-ddec-459b-bf52-f4584a60279d
	I0127 12:36:47.331419    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:47.331419    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:47.331419    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:47.331419    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:47.331419    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:47 GMT
	I0127 12:36:47.331615    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:47.821043    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:47.821043    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:47.821043    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:47.821043    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:47.828936    9948 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 12:36:47.828936    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:47.828936    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:47.828936    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:47 GMT
	I0127 12:36:47.828936    9948 round_trippers.go:580]     Audit-Id: d0f8558a-8914-401e-a890-28f3f3846e20
	I0127 12:36:47.828936    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:47.828936    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:47.828936    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:47.828936    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"2021","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7275 chars]
	I0127 12:36:47.830137    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:47.830208    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:47.830208    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:47.830208    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:47.830469    9948 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0127 12:36:47.830469    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:47.830469    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:47.830469    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:47.830469    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:47 GMT
	I0127 12:36:47.830469    9948 round_trippers.go:580]     Audit-Id: 2c93b556-f219-4bc3-bcb2-534a9256833e
	I0127 12:36:47.830469    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:47.830469    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:47.835693    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:47.835693    9948 pod_ready.go:103] pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace has status "Ready":"False"
	I0127 12:36:48.320957    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:48.320957    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:48.320957    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:48.320957    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:48.325256    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:48.325337    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:48.325337    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:48.325337    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:48 GMT
	I0127 12:36:48.325337    9948 round_trippers.go:580]     Audit-Id: 59cd427f-2dda-4af3-b85d-8a9951703b09
	I0127 12:36:48.325337    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:48.325337    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:48.325337    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:48.326379    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"2021","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7275 chars]
	I0127 12:36:48.327250    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:48.327289    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:48.327351    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:48.327351    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:48.330734    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:48.330791    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:48.330791    9948 round_trippers.go:580]     Audit-Id: addd9b7d-8852-42f4-bd35-b5d52ca2b2ec
	I0127 12:36:48.330791    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:48.330791    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:48.330791    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:48.330791    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:48.330791    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:48 GMT
	I0127 12:36:48.330791    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:48.820925    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-2qw6w
	I0127 12:36:48.820925    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:48.820925    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:48.820925    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:48.826876    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:36:48.826876    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:48.826876    9948 round_trippers.go:580]     Audit-Id: 5fa8e9ff-a6a3-429c-8e2a-72e15c9f7add
	I0127 12:36:48.826876    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:48.826876    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:48.826876    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:48.826876    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:48.826876    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:48 GMT
	I0127 12:36:48.827727    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"2024","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7046 chars]
	I0127 12:36:48.828532    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:48.828705    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:48.828705    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:48.828705    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:48.835065    9948 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0127 12:36:48.835065    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:48.835065    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:48.835065    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:48 GMT
	I0127 12:36:48.835622    9948 round_trippers.go:580]     Audit-Id: 6acc7d47-78ce-490c-978c-9a4f4e210905
	I0127 12:36:48.835622    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:48.835622    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:48.835622    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:48.835686    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:48.836289    9948 pod_ready.go:93] pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:48.836289    9948 pod_ready.go:82] duration metric: took 15.5159311s for pod "coredns-668d6bf9bc-2qw6w" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:48.836289    9948 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:48.836289    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-659000
	I0127 12:36:48.836289    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:48.836289    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:48.836289    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:48.839500    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:48.839500    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:48.839588    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:48.839588    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:48 GMT
	I0127 12:36:48.839588    9948 round_trippers.go:580]     Audit-Id: e21fa624-ca15-457a-87ec-77af1716c28f
	I0127 12:36:48.839588    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:48.839588    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:48.839588    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:48.840031    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-659000","namespace":"kube-system","uid":"4c33fa42-51a7-4a7a-a497-cce80b8773d6","resourceVersion":"1939","creationTimestamp":"2025-01-27T12:35:42Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.29.198.106:2379","kubernetes.io/config.hash":"575cefa3aa8017dce576fa244e719a4e","kubernetes.io/config.mirror":"575cefa3aa8017dce576fa244e719a4e","kubernetes.io/config.seen":"2025-01-27T12:35:36.285837685Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:35:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6617 chars]
	I0127 12:36:48.840454    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:48.840454    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:48.840454    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:48.840454    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:48.842619    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:48.843248    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:48.843248    9948 round_trippers.go:580]     Audit-Id: 47a86454-86a5-4234-8b43-573632b52286
	I0127 12:36:48.843248    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:48.843248    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:48.843248    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:48.843318    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:48.843318    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:48 GMT
	I0127 12:36:48.843557    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:48.844012    9948 pod_ready.go:93] pod "etcd-multinode-659000" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:48.844012    9948 pod_ready.go:82] duration metric: took 7.7226ms for pod "etcd-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:48.844088    9948 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:48.844196    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-659000
	I0127 12:36:48.844196    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:48.844196    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:48.844196    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:48.846871    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:48.846871    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:48.846871    9948 round_trippers.go:580]     Audit-Id: 26221292-c660-4a10-ab6a-632192a23b5a
	I0127 12:36:48.846871    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:48.846871    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:48.846871    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:48.846871    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:48.846871    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:48 GMT
	I0127 12:36:48.846871    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-659000","namespace":"kube-system","uid":"8fbee94f-fd8f-4431-bd9f-b75d49cb19d4","resourceVersion":"1937","creationTimestamp":"2025-01-27T12:35:42Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.29.198.106:8443","kubernetes.io/config.hash":"b9fbd177058ba298cde2a92c4ef5c601","kubernetes.io/config.mirror":"b9fbd177058ba298cde2a92c4ef5c601","kubernetes.io/config.seen":"2025-01-27T12:35:36.265565317Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:35:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8049 chars]
	I0127 12:36:48.847970    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:48.847970    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:48.847970    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:48.848039    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:48.850747    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:48.851008    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:48.851008    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:48 GMT
	I0127 12:36:48.851008    9948 round_trippers.go:580]     Audit-Id: f08e9b74-0454-4e56-b61c-b25ad72ecf29
	I0127 12:36:48.851008    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:48.851008    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:48.851008    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:48.851091    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:48.851793    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:48.852395    9948 pod_ready.go:93] pod "kube-apiserver-multinode-659000" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:48.852395    9948 pod_ready.go:82] duration metric: took 8.3073ms for pod "kube-apiserver-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:48.852486    9948 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:48.852559    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-659000
	I0127 12:36:48.852632    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:48.852632    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:48.852687    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:48.854785    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:48.855165    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:48.855165    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:48.855165    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:48.855165    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:48 GMT
	I0127 12:36:48.855165    9948 round_trippers.go:580]     Audit-Id: 47f73fc3-4509-4187-b156-bbc3ae52477b
	I0127 12:36:48.855165    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:48.855165    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:48.855438    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-659000","namespace":"kube-system","uid":"8be02f36-161c-44f3-b526-56db3b8a007a","resourceVersion":"1923","creationTimestamp":"2025-01-27T12:11:59Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4a14d0700eafa36dd3913955f2c0f839","kubernetes.io/config.mirror":"4a14d0700eafa36dd3913955f2c0f839","kubernetes.io/config.seen":"2025-01-27T12:11:59.106472767Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:11:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7475 chars]
	I0127 12:36:48.855926    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:48.856366    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:48.856366    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:48.856366    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:48.859227    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:48.859227    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:48.859227    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:48.859227    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:48.859227    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:48.859864    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:48 GMT
	I0127 12:36:48.859864    9948 round_trippers.go:580]     Audit-Id: c3c29682-020b-4e2e-8559-65e52d1018d6
	I0127 12:36:48.859864    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:48.859898    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:48.860452    9948 pod_ready.go:93] pod "kube-controller-manager-multinode-659000" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:48.860523    9948 pod_ready.go:82] duration metric: took 8.0371ms for pod "kube-controller-manager-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:48.860523    9948 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pjhc8" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:48.860623    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pjhc8
	I0127 12:36:48.860702    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:48.860702    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:48.860702    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:48.864925    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:48.864925    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:48.864925    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:48.864925    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:48.864925    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:48.864925    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:48.864925    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:48 GMT
	I0127 12:36:48.864925    9948 round_trippers.go:580]     Audit-Id: 4bbd0dc8-ffb8-4d23-b1b0-4f7186552d1f
	I0127 12:36:48.864925    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pjhc8","generateName":"kube-proxy-","namespace":"kube-system","uid":"ddb6698c-b83d-4a49-9672-c894e87cbb66","resourceVersion":"1998","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d88eb776-b464-4f2b-8140-44249610a7fa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d88eb776-b464-4f2b-8140-44249610a7fa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6433 chars]
	I0127 12:36:48.864925    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000-m02
	I0127 12:36:48.864925    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:48.864925    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:48.864925    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:48.868510    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:48.868510    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:48.868510    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:48.868510    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:48.868510    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:48.868510    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:48 GMT
	I0127 12:36:48.868510    9948 round_trippers.go:580]     Audit-Id: 51226419-bf8e-4030-9631-bc750d16862c
	I0127 12:36:48.868510    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:48.868510    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m02","uid":"fb24127b-99b0-4aa2-be46-1c6fd0901530","resourceVersion":"2006","creationTimestamp":"2025-01-27T12:15:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_15_08_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:15:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4584 chars]
	I0127 12:36:48.869060    9948 pod_ready.go:98] node "multinode-659000-m02" hosting pod "kube-proxy-pjhc8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000-m02" has status "Ready":"Unknown"
	I0127 12:36:48.869060    9948 pod_ready.go:82] duration metric: took 8.537ms for pod "kube-proxy-pjhc8" in "kube-system" namespace to be "Ready" ...
	E0127 12:36:48.869060    9948 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-659000-m02" hosting pod "kube-proxy-pjhc8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000-m02" has status "Ready":"Unknown"
	I0127 12:36:48.869060    9948 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-s46mv" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:49.021437    9948 request.go:632] Waited for 152.3757ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s46mv
	I0127 12:36:49.021738    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s46mv
	I0127 12:36:49.021788    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:49.021788    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:49.021788    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:49.025093    9948 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0127 12:36:49.025093    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:49.025093    9948 round_trippers.go:580]     Audit-Id: 48044657-793a-45cb-b316-6a60c1c86261
	I0127 12:36:49.025093    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:49.025177    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:49.025177    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:49.025177    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:49.025177    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:49 GMT
	I0127 12:36:49.025529    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s46mv","generateName":"kube-proxy-","namespace":"kube-system","uid":"ae3b8daf-d674-4cfe-8652-cb5ff6ba8615","resourceVersion":"1898","creationTimestamp":"2025-01-27T12:12:03Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d88eb776-b464-4f2b-8140-44249610a7fa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:03Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d88eb776-b464-4f2b-8140-44249610a7fa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6405 chars]
	I0127 12:36:49.222034    9948 request.go:632] Waited for 196.1432ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:49.222441    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:49.222441    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:49.222559    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:49.222559    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:49.226229    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:36:49.226326    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:49.226326    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:49.226326    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:49.226326    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:49 GMT
	I0127 12:36:49.226326    9948 round_trippers.go:580]     Audit-Id: e759def6-4238-4b1c-9744-c9caa6aea460
	I0127 12:36:49.226326    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:49.226326    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:49.226885    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:49.227018    9948 pod_ready.go:93] pod "kube-proxy-s46mv" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:49.227018    9948 pod_ready.go:82] duration metric: took 357.9544ms for pod "kube-proxy-s46mv" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:49.227018    9948 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sk5js" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:49.421122    9948 request.go:632] Waited for 193.5557ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sk5js
	I0127 12:36:49.421609    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sk5js
	I0127 12:36:49.421609    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:49.421609    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:49.421609    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:49.426007    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:49.426090    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:49.426090    9948 round_trippers.go:580]     Audit-Id: a9f266d8-14da-4408-96ab-db5223079ceb
	I0127 12:36:49.426213    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:49.426213    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:49.426213    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:49.426236    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:49.426236    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:49 GMT
	I0127 12:36:49.426618    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-sk5js","generateName":"kube-proxy-","namespace":"kube-system","uid":"ba679e1d-713c-4bd4-b267-2b887c1ac4df","resourceVersion":"1793","creationTimestamp":"2025-01-27T12:19:54Z","labels":{"controller-revision-hash":"566d7b9f85","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"d88eb776-b464-4f2b-8140-44249610a7fa","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:19:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d88eb776-b464-4f2b-8140-44249610a7fa\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6428 chars]
	I0127 12:36:49.621641    9948 request.go:632] Waited for 194.518ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/nodes/multinode-659000-m03
	I0127 12:36:49.621641    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000-m03
	I0127 12:36:49.621641    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:49.621641    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:49.621641    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:49.626140    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:49.626140    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:49.626140    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:49.626140    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:49 GMT
	I0127 12:36:49.626140    9948 round_trippers.go:580]     Audit-Id: d3690c0b-4873-477a-9ad1-7656393a8fd0
	I0127 12:36:49.626140    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:49.626140    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:49.626140    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:49.626140    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000-m03","uid":"0516f5fa-16ad-40aa-9616-01d098e46466","resourceVersion":"1941","creationTimestamp":"2025-01-27T12:31:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2025_01_27T12_31_04_0700","minikube.k8s.io/version":"v1.35.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:31:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4400 chars]
	I0127 12:36:49.626894    9948 pod_ready.go:98] node "multinode-659000-m03" hosting pod "kube-proxy-sk5js" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000-m03" has status "Ready":"Unknown"
	I0127 12:36:49.626962    9948 pod_ready.go:82] duration metric: took 399.8713ms for pod "kube-proxy-sk5js" in "kube-system" namespace to be "Ready" ...
	E0127 12:36:49.626962    9948 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-659000-m03" hosting pod "kube-proxy-sk5js" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-659000-m03" has status "Ready":"Unknown"
	I0127 12:36:49.626962    9948 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:49.821366    9948 request.go:632] Waited for 194.3415ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-659000
	I0127 12:36:49.821750    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-659000
	I0127 12:36:49.821750    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:49.821750    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:49.821750    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:49.826259    9948 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0127 12:36:49.826332    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:49.826332    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:49 GMT
	I0127 12:36:49.826395    9948 round_trippers.go:580]     Audit-Id: 6c855360-0c03-4c39-a29d-4242802315c2
	I0127 12:36:49.826395    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:49.826395    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:49.826395    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:49.826395    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:49.826725    9948 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-659000","namespace":"kube-system","uid":"52b91964-a331-4925-9e07-c8df32b4176d","resourceVersion":"1925","creationTimestamp":"2025-01-27T12:11:58Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"e6c90fc43fa6c0754218ff1c4162045d","kubernetes.io/config.mirror":"e6c90fc43fa6c0754218ff1c4162045d","kubernetes.io/config.seen":"2025-01-27T12:11:51.419790825Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:11:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5568 chars]
	I0127 12:36:50.021561    9948 request.go:632] Waited for 194.5366ms due to client-side throttling, not priority and fairness, request: GET:https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:50.021561    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes/multinode-659000
	I0127 12:36:50.021561    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:50.021561    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:50.021561    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:50.029232    9948 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 12:36:50.029299    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:50.029299    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:50.029299    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:50 GMT
	I0127 12:36:50.029299    9948 round_trippers.go:580]     Audit-Id: ba8f26d1-188c-464b-b234-9de842093182
	I0127 12:36:50.029299    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:50.029299    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:50.029299    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:50.029299    9948 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2025-01-27T12:11:55Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0127 12:36:50.030220    9948 pod_ready.go:93] pod "kube-scheduler-multinode-659000" in "kube-system" namespace has status "Ready":"True"
	I0127 12:36:50.030220    9948 pod_ready.go:82] duration metric: took 403.2533ms for pod "kube-scheduler-multinode-659000" in "kube-system" namespace to be "Ready" ...
	I0127 12:36:50.030220    9948 pod_ready.go:39] duration metric: took 16.7212511s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0127 12:36:50.030220    9948 api_server.go:52] waiting for apiserver process to appear ...
	I0127 12:36:50.042588    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 12:36:50.073589    9948 command_runner.go:130] > ea993630a310
	I0127 12:36:50.073699    9948 logs.go:282] 1 containers: [ea993630a310]
	I0127 12:36:50.083119    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 12:36:50.108205    9948 command_runner.go:130] > 0ef2a3b50bae
	I0127 12:36:50.108275    9948 logs.go:282] 1 containers: [0ef2a3b50bae]
	I0127 12:36:50.121182    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 12:36:50.150046    9948 command_runner.go:130] > b3a9ed6e130c
	I0127 12:36:50.150046    9948 command_runner.go:130] > f818dd15d8b0
	I0127 12:36:50.150046    9948 logs.go:282] 2 containers: [b3a9ed6e130c f818dd15d8b0]
	I0127 12:36:50.159402    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 12:36:50.182881    9948 command_runner.go:130] > ed51c7eaa966
	I0127 12:36:50.182881    9948 command_runner.go:130] > a16e06a03860
	I0127 12:36:50.184878    9948 logs.go:282] 2 containers: [ed51c7eaa966 a16e06a03860]
	I0127 12:36:50.194142    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 12:36:50.215471    9948 command_runner.go:130] > 0283b35dee3c
	I0127 12:36:50.215471    9948 command_runner.go:130] > bbec7ccef7da
	I0127 12:36:50.218082    9948 logs.go:282] 2 containers: [0283b35dee3c bbec7ccef7da]
	I0127 12:36:50.227809    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 12:36:50.253979    9948 command_runner.go:130] > 8d4872cda28d
	I0127 12:36:50.253979    9948 command_runner.go:130] > e07a66f8f619
	I0127 12:36:50.253979    9948 logs.go:282] 2 containers: [8d4872cda28d e07a66f8f619]
	I0127 12:36:50.263626    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0127 12:36:50.286152    9948 command_runner.go:130] > 373bec67270f
	I0127 12:36:50.286152    9948 command_runner.go:130] > d758000dda95
	I0127 12:36:50.287448    9948 logs.go:282] 2 containers: [373bec67270f d758000dda95]
	I0127 12:36:50.287542    9948 logs.go:123] Gathering logs for Docker ...
	I0127 12:36:50.287542    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0127 12:36:50.318623    9948 command_runner.go:130] > Jan 27 12:34:11 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0127 12:36:50.318623    9948 command_runner.go:130] > Jan 27 12:34:11 minikube cri-dockerd[223]: time="2025-01-27T12:34:11Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0127 12:36:50.318697    9948 command_runner.go:130] > Jan 27 12:34:11 minikube cri-dockerd[223]: time="2025-01-27T12:34:11Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0127 12:36:50.318697    9948 command_runner.go:130] > Jan 27 12:34:11 minikube cri-dockerd[223]: time="2025-01-27T12:34:11Z" level=info msg="Start docker client with request timeout 0s"
	I0127 12:36:50.318697    9948 command_runner.go:130] > Jan 27 12:34:11 minikube cri-dockerd[223]: time="2025-01-27T12:34:11Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0127 12:36:50.318764    9948 command_runner.go:130] > Jan 27 12:34:11 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:50.318764    9948 command_runner.go:130] > Jan 27 12:34:11 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0127 12:36:50.318764    9948 command_runner.go:130] > Jan 27 12:34:11 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0127 12:36:50.318824    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0127 12:36:50.318824    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0127 12:36:50.318824    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0127 12:36:50.318824    9948 command_runner.go:130] > Jan 27 12:34:14 minikube cri-dockerd[404]: time="2025-01-27T12:34:14Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0127 12:36:50.318824    9948 command_runner.go:130] > Jan 27 12:34:14 minikube cri-dockerd[404]: time="2025-01-27T12:34:14Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0127 12:36:50.318882    9948 command_runner.go:130] > Jan 27 12:34:14 minikube cri-dockerd[404]: time="2025-01-27T12:34:14Z" level=info msg="Start docker client with request timeout 0s"
	I0127 12:36:50.318953    9948 command_runner.go:130] > Jan 27 12:34:14 minikube cri-dockerd[404]: time="2025-01-27T12:34:14Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0127 12:36:50.318953    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:50.318953    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0127 12:36:50.319037    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0127 12:36:50.319037    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0127 12:36:50.319037    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0127 12:36:50.319064    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0127 12:36:50.319064    9948 command_runner.go:130] > Jan 27 12:34:16 minikube cri-dockerd[425]: time="2025-01-27T12:34:16Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0127 12:36:50.319064    9948 command_runner.go:130] > Jan 27 12:34:16 minikube cri-dockerd[425]: time="2025-01-27T12:34:16Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0127 12:36:50.319115    9948 command_runner.go:130] > Jan 27 12:34:16 minikube cri-dockerd[425]: time="2025-01-27T12:34:16Z" level=info msg="Start docker client with request timeout 0s"
	I0127 12:36:50.319115    9948 command_runner.go:130] > Jan 27 12:34:16 minikube cri-dockerd[425]: time="2025-01-27T12:34:16Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0127 12:36:50.319173    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:50.319173    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0127 12:36:50.319173    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0127 12:36:50.319173    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0127 12:36:50.319173    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0127 12:36:50.319244    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0127 12:36:50.319244    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0127 12:36:50.319288    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0127 12:36:50.319288    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 systemd[1]: Starting Docker Application Container Engine...
	I0127 12:36:50.319382    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[653]: time="2025-01-27T12:35:01.316616305Z" level=info msg="Starting up"
	I0127 12:36:50.319382    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[653]: time="2025-01-27T12:35:01.317424338Z" level=info msg="containerd not running, starting managed containerd"
	I0127 12:36:50.319417    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[653]: time="2025-01-27T12:35:01.318870498Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=659
	I0127 12:36:50.319454    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.350184287Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0127 12:36:50.319454    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374094572Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0127 12:36:50.319501    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374181575Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0127 12:36:50.319501    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374315681Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0127 12:36:50.319557    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374337282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.319557    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374861203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:50.319557    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374889804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.319557    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375040811Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:50.319642    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375239819Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.319667    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375267320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:50.319709    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375281220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.319709    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375833643Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.319709    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.376559373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.319760    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.379449292Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:50.319760    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.379538296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.319876    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.379661901Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:50.319981    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.379800807Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0127 12:36:50.319981    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.380313228Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0127 12:36:50.319981    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.380441533Z" level=info msg="metadata content store policy set" policy=shared
	I0127 12:36:50.320100    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.385960360Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0127 12:36:50.320100    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386099266Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0127 12:36:50.320100    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386121867Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0127 12:36:50.320100    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386137768Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0127 12:36:50.320184    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386151968Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0127 12:36:50.320184    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386229971Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0127 12:36:50.320184    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386475981Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0127 12:36:50.320184    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386600687Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0127 12:36:50.320269    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386685890Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0127 12:36:50.320269    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386757893Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0127 12:36:50.320365    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386815695Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.320365    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386833196Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.320365    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386854497Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.320427    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386882698Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.320427    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386897399Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.320427    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386908999Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.320427    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386920500Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.320427    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386931000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.320512    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386948401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320512    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386962701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320538    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387079606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320578    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387099107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320578    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387131708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320578    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387149509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320660    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387164010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320660    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387179110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320660    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387194311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320660    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387212812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320660    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387227412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320743    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387242613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320769    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387257314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320769    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387275514Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0127 12:36:50.320808    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387300315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320808    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387352418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320808    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387385019Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0127 12:36:50.320859    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387423920Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0127 12:36:50.320859    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387443921Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0127 12:36:50.320914    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387454422Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0127 12:36:50.320914    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387465222Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0127 12:36:50.320967    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387473923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.320967    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387486423Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0127 12:36:50.321041    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387496523Z" level=info msg="NRI interface is disabled by configuration."
	I0127 12:36:50.321041    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.388077647Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0127 12:36:50.321041    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.388176351Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0127 12:36:50.321093    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.388221553Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0127 12:36:50.321093    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.388239554Z" level=info msg="containerd successfully booted in 0.040630s"
	I0127 12:36:50.321093    9948 command_runner.go:130] > Jan 27 12:35:02 multinode-659000 dockerd[653]: time="2025-01-27T12:35:02.375461301Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0127 12:36:50.321093    9948 command_runner.go:130] > Jan 27 12:35:02 multinode-659000 dockerd[653]: time="2025-01-27T12:35:02.619440119Z" level=info msg="Loading containers: start."
	I0127 12:36:50.321152    9948 command_runner.go:130] > Jan 27 12:35:02 multinode-659000 dockerd[653]: time="2025-01-27T12:35:02.931712674Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0127 12:36:50.321206    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.079754338Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0127 12:36:50.321206    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.199112944Z" level=info msg="Loading containers: done."
	I0127 12:36:50.321206    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.227370410Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0127 12:36:50.321206    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.227394111Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0127 12:36:50.321264    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.227415612Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0127 12:36:50.321264    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.227924231Z" level=info msg="Daemon has completed initialization"
	I0127 12:36:50.321264    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.267619030Z" level=info msg="API listen on /var/run/docker.sock"
	I0127 12:36:50.321317    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.267851638Z" level=info msg="API listen on [::]:2376"
	I0127 12:36:50.321317    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 systemd[1]: Started Docker Application Container Engine.
	I0127 12:36:50.321317    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.208684124Z" level=info msg="Processing signal 'terminated'"
	I0127 12:36:50.321317    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.210887831Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0127 12:36:50.321317    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.211188432Z" level=info msg="Daemon shutdown complete"
	I0127 12:36:50.321399    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.211249132Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0127 12:36:50.321424    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.211349733Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0127 12:36:50.321424    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 systemd[1]: Stopping Docker Application Container Engine...
	I0127 12:36:50.321424    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 systemd[1]: docker.service: Deactivated successfully.
	I0127 12:36:50.321464    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 systemd[1]: Stopped Docker Application Container Engine.
	I0127 12:36:50.321464    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 systemd[1]: Starting Docker Application Container Engine...
	I0127 12:36:50.321464    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:29.270852796Z" level=info msg="Starting up"
	I0127 12:36:50.321514    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:29.271817099Z" level=info msg="containerd not running, starting managed containerd"
	I0127 12:36:50.321514    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:29.272921603Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1109
	I0127 12:36:50.321514    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.304741210Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0127 12:36:50.321590    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329258592Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0127 12:36:50.321590    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329353092Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0127 12:36:50.321590    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329390892Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0127 12:36:50.321651    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329406192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.321651    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329428593Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:50.321651    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329441293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.321726    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329563193Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:50.321726    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329667793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.321780    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329687993Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:50.321780    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329698693Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.321780    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329723194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.321780    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329854194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.321886    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.332844104Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:50.321886    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.332945004Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:50.321950    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333117005Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:50.321950    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333187905Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0127 12:36:50.321950    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333222205Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0127 12:36:50.322003    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333244905Z" level=info msg="metadata content store policy set" policy=shared
	I0127 12:36:50.322003    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333669407Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0127 12:36:50.322003    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333741907Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0127 12:36:50.322060    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333760007Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0127 12:36:50.322060    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333804107Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0127 12:36:50.322060    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333825507Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0127 12:36:50.322113    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333876808Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0127 12:36:50.322113    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334348509Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0127 12:36:50.322113    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334487410Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0127 12:36:50.322170    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334670410Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0127 12:36:50.322170    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334694510Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0127 12:36:50.322223    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334722510Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.322223    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334740210Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.322223    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334754110Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.322223    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334768211Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.322288    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334783611Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.322288    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334797111Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.322339    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334827611Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.322339    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334839711Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0127 12:36:50.322339    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334900511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322339    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334918411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322339    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334939711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322421    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334956111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322443    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334972911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322443    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335000311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322483    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335303412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322483    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335328412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322483    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335345712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322483    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335365113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322538    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335379713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322538    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335394013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322538    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335408713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322593    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335432513Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0127 12:36:50.322593    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335458213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322593    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335473813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322649    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335509613Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0127 12:36:50.322649    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335706914Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0127 12:36:50.322649    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335751914Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0127 12:36:50.322724    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335766514Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0127 12:36:50.322822    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335779214Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0127 12:36:50.322877    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335790814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0127 12:36:50.322877    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335808914Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0127 12:36:50.322877    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335823714Z" level=info msg="NRI interface is disabled by configuration."
	I0127 12:36:50.322951    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.336050915Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0127 12:36:50.322951    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.336227915Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0127 12:36:50.322951    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.336312916Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0127 12:36:50.323006    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.336356016Z" level=info msg="containerd successfully booted in 0.033394s"
	I0127 12:36:50.323006    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.313483202Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0127 12:36:50.323006    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.352802934Z" level=info msg="Loading containers: start."
	I0127 12:36:50.323068    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.586901421Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0127 12:36:50.323068    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.690006868Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0127 12:36:50.323128    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.804531453Z" level=info msg="Loading containers: done."
	I0127 12:36:50.323128    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.832567747Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0127 12:36:50.323128    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.832684748Z" level=info msg="Daemon has completed initialization"
	I0127 12:36:50.323128    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.868895669Z" level=info msg="API listen on /var/run/docker.sock"
	I0127 12:36:50.323189    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 systemd[1]: Started Docker Application Container Engine.
	I0127 12:36:50.323189    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.869822273Z" level=info msg="API listen on [::]:2376"
	I0127 12:36:50.323189    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0127 12:36:50.323189    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0127 12:36:50.323248    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0127 12:36:50.323248    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Start docker client with request timeout 0s"
	I0127 12:36:50.323248    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0127 12:36:50.323248    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Loaded network plugin cni"
	I0127 12:36:50.323316    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0127 12:36:50.323316    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0127 12:36:50.323316    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0127 12:36:50.323375    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0127 12:36:50.323375    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Start cri-dockerd grpc backend"
	I0127 12:36:50.323375    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0127 12:36:50.323433    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:36Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-58667487b6-2jq9j_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"4c82c0ec4aeaa9b21462a8248326ae982d6f7a0aee31347f1a58d216f0335177\""
	I0127 12:36:50.323433    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:36Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-668d6bf9bc-2qw6w_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"4a53e133a1cd6ab9514cb15ac3c4f1d5683d17008b482cebb08bf4809e060709\""
	I0127 12:36:50.323493    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.148610487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.323493    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.149713190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.323550    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.149731191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.323550    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.149823291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.323604    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.227312151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.323604    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.227946754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.323604    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.228465355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.323663    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.229058857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.323663    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b770a357d98307d140bf1525f91cca5fa9278f7f9428b9b956db31e6a36de7f2/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:50.323717    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.326758786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.323717    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.326897686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.323717    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.327082287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.323772    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.327397788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.323772    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.340486032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.323823    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.340542232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.323823    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.340557232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.323823    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.340640833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.323899    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/910315897d84204b3db03c56eaeac0c855a23f6250a406220a840c10e2dad7a7/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:50.323899    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5601285bb260a8ced44a77e9dbb10f08580841c917885470ec5941525f08ee76/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:50.323899    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cdf534e99b2bbcc52d3bf2ce73ef5d4299b5264cf0a050fa21ff7f6fe2bb3b2a/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:50.323955    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.671974447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.323955    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.672075247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.323955    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.672094947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.323955    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.673787353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324029    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.761333147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.324029    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.761791949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.324084    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.761989149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324084    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.763491554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324084    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.875104030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.324141    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.875307231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.324141    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.879314144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324193    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.879751245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324193    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.905404632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.324269    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.905473732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.324269    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.905487532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324269    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.905580032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324347    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:41Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0127 12:36:50.324347    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:42.944884578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.324347    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:42.944962279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.324437    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:42.944975379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324437    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:42.945417180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324488    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.028307259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.324488    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.028541060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.324488    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.028779960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324625    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.029212562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324696    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.033020375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.324696    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.033338176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.324696    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.033463276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324763    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.033775977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324763    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/34d579bb511fec290478f20b13002063b43c1a71bd6f2f45f1d83bbd8ac971ab/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:50.324822    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b613e9a7a356580fd5381e358408317fd6120a119c23f3f196adda302e5ca97f/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:50.324822    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d43e4cc62e0877d4b65191623d58195cd33c60eff33c6e49e605f69620d5115f/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:50.324878    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.564400062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.324878    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.564959364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.324972    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.565260665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.324972    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.565864167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325051    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.593549260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.325051    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.594548363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.325051    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.594809964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325117    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.595677067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325117    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.831064858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.325164    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.831237859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.325164    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.831252459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325214    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.831462360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325214    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:14.113708902Z" level=info msg="shim disconnected" id=9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f namespace=moby
	I0127 12:36:50.325214    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:14.113811702Z" level=warning msg="cleaning up after shim disconnected" id=9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f namespace=moby
	I0127 12:36:50.325290    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:14.113825002Z" level=info msg="cleaning up dead shim" namespace=moby
	I0127 12:36:50.325290    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 dockerd[1103]: time="2025-01-27T12:36:14.115914814Z" level=info msg="ignoring event" container=9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0127 12:36:50.325340    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.602318882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.325340    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.604079090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.325340    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.604098490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325388    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.604656892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325388    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.795612113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.325443    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.795786714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.325490    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.796654617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325490    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.796995818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325564    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861006350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.325564    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861082751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.325619    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861094651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325619    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861334452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325653    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:36:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6b22dbb5ef3e0d283203499fffad001c9c20c643564a55e5bfa5d6352f80e178/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:50.325683    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:36:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef504f99724cba01531b3894329439ae069a4ccac272e31bfac333cc24e62c53/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0127 12:36:50.325683    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.321502068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.325683    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.321825070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.325683    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.321903471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325683    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.322491776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325683    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.384958874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:50.325683    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.385201176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:50.325683    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.385326577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.325683    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.385735080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:50.352785    9948 logs.go:123] Gathering logs for etcd [0ef2a3b50bae] ...
	I0127 12:36:50.352785    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ef2a3b50bae"
	I0127 12:36:50.378318    9948 command_runner.go:130] ! {"level":"warn","ts":"2025-01-27T12:35:38.248296Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0127 12:36:50.379336    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.248523Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.29.198.106:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.29.198.106:2380","--initial-cluster=multinode-659000=https://172.29.198.106:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.29.198.106:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.29.198.106:2380","--name=multinode-659000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","-
-proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0127 12:36:50.379336    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.249804Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0127 12:36:50.379435    9948 command_runner.go:130] ! {"level":"warn","ts":"2025-01-27T12:35:38.249933Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0127 12:36:50.379435    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.249951Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://172.29.198.106:2380"]}
	I0127 12:36:50.379435    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.250358Z","caller":"embed/etcd.go:497","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0127 12:36:50.379506    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.255871Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.29.198.106:2379"]}
	I0127 12:36:50.379572    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.258341Z","caller":"embed/etcd.go:311","msg":"starting an etcd server","etcd-version":"3.5.16","git-sha":"f20bbad","go-version":"go1.22.7","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-659000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.29.198.106:2380"],"listen-peer-urls":["https://172.29.198.106:2380"],"advertise-client-urls":["https://172.29.198.106:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.198.106:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initi
al-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0127 12:36:50.379656    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.282453Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"23.428079ms"}
	I0127 12:36:50.379714    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.322950Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0127 12:36:50.379714    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.352706Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"d020e240c474bd89","local-member-id":"925e6945be3a5b5b","commit-index":2090}
	I0127 12:36:50.379770    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.354002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b switched to configuration voters=()"}
	I0127 12:36:50.379770    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.354079Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became follower at term 2"}
	I0127 12:36:50.379827    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.354103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 925e6945be3a5b5b [peers: [], term: 2, commit: 2090, applied: 0, lastindex: 2090, lastterm: 2]"}
	I0127 12:36:50.379827    9948 command_runner.go:130] ! {"level":"warn","ts":"2025-01-27T12:35:38.367343Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0127 12:36:50.379827    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.371532Z","caller":"mvcc/kvstore.go:346","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1388}
	I0127 12:36:50.379827    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.377112Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":1808}
	I0127 12:36:50.379827    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.386775Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0127 12:36:50.379914    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.395908Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"925e6945be3a5b5b","timeout":"7s"}
	I0127 12:36:50.379945    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.396497Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"925e6945be3a5b5b"}
	I0127 12:36:50.379945    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.396684Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"925e6945be3a5b5b","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	I0127 12:36:50.379945    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.396970Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	I0127 12:36:50.380016    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.399309Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0127 12:36:50.380045    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.401105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b switched to configuration voters=(10546983125613435739)"}
	I0127 12:36:50.380045    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.400045Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0127 12:36:50.380088    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.404834Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0127 12:36:50.380088    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.404888Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0127 12:36:50.380088    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.405566Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d020e240c474bd89","local-member-id":"925e6945be3a5b5b","added-peer-id":"925e6945be3a5b5b","added-peer-peer-urls":["https://172.29.204.17:2380"]}
	I0127 12:36:50.380143    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.405716Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d020e240c474bd89","local-member-id":"925e6945be3a5b5b","cluster-version":"3.5"}
	I0127 12:36:50.380143    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.405754Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0127 12:36:50.380203    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.407643Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0127 12:36:50.380255    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.408091Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"925e6945be3a5b5b","initial-advertise-peer-urls":["https://172.29.198.106:2380"],"listen-peer-urls":["https://172.29.198.106:2380"],"advertise-client-urls":["https://172.29.198.106:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.198.106:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0127 12:36:50.380310    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.408386Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0127 12:36:50.380310    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.408686Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.29.198.106:2380"}
	I0127 12:36:50.380310    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.408809Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.29.198.106:2380"}
	I0127 12:36:50.380373    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.355207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b is starting a new election at term 2"}
	I0127 12:36:50.380373    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.355615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became pre-candidate at term 2"}
	I0127 12:36:50.380373    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.355770Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b received MsgPreVoteResp from 925e6945be3a5b5b at term 2"}
	I0127 12:36:50.380435    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.355926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became candidate at term 3"}
	I0127 12:36:50.380481    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.356088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b received MsgVoteResp from 925e6945be3a5b5b at term 3"}
	I0127 12:36:50.380481    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.356235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became leader at term 3"}
	I0127 12:36:50.380481    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.356449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 925e6945be3a5b5b elected leader 925e6945be3a5b5b at term 3"}
	I0127 12:36:50.380481    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.368540Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"925e6945be3a5b5b","local-member-attributes":"{Name:multinode-659000 ClientURLs:[https://172.29.198.106:2379]}","request-path":"/0/members/925e6945be3a5b5b/attributes","cluster-id":"d020e240c474bd89","publish-timeout":"7s"}
	I0127 12:36:50.380580    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.369045Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0127 12:36:50.380580    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.371833Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0127 12:36:50.380580    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.372238Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0127 12:36:50.380580    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.374158Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0127 12:36:50.380580    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.383680Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0127 12:36:50.380580    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.391404Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0127 12:36:50.380580    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.392982Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.29.198.106:2379"}
	I0127 12:36:50.380683    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.399505Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0127 12:36:50.387951    9948 logs.go:123] Gathering logs for kube-scheduler [ed51c7eaa966] ...
	I0127 12:36:50.387951    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed51c7eaa966"
	I0127 12:36:50.412266    9948 command_runner.go:130] ! I0127 12:35:39.285954       1 serving.go:386] Generated self-signed cert in-memory
	I0127 12:36:50.413250    9948 command_runner.go:130] ! W0127 12:35:41.361191       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0127 12:36:50.413402    9948 command_runner.go:130] ! W0127 12:35:41.363231       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0127 12:36:50.413470    9948 command_runner.go:130] ! W0127 12:35:41.363467       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0127 12:36:50.413470    9948 command_runner.go:130] ! W0127 12:35:41.363598       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 12:36:50.413524    9948 command_runner.go:130] ! I0127 12:35:41.458309       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 12:36:50.413524    9948 command_runner.go:130] ! I0127 12:35:41.458594       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:50.413575    9948 command_runner.go:130] ! I0127 12:35:41.465036       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 12:36:50.413575    9948 command_runner.go:130] ! I0127 12:35:41.465587       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 12:36:50.413575    9948 command_runner.go:130] ! I0127 12:35:41.466480       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:36:50.413617    9948 command_runner.go:130] ! I0127 12:35:41.466554       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:50.413617    9948 command_runner.go:130] ! I0127 12:35:41.567642       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:36:50.415904    9948 logs.go:123] Gathering logs for kube-proxy [bbec7ccef7da] ...
	I0127 12:36:50.415904    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbec7ccef7da"
	I0127 12:36:50.441771    9948 command_runner.go:130] ! I0127 12:12:05.290111       1 server_linux.go:66] "Using iptables proxy"
	I0127 12:36:50.441771    9948 command_runner.go:130] ! E0127 12:12:05.321300       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0127 12:36:50.441771    9948 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0127 12:36:50.441771    9948 command_runner.go:130] ! 	add table ip kube-proxy
	I0127 12:36:50.441771    9948 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:50.441771    9948 command_runner.go:130] !  >
	I0127 12:36:50.441771    9948 command_runner.go:130] ! E0127 12:12:05.352123       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0127 12:36:50.441771    9948 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0127 12:36:50.441771    9948 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0127 12:36:50.441771    9948 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:50.441771    9948 command_runner.go:130] !  >
	I0127 12:36:50.441771    9948 command_runner.go:130] ! I0127 12:12:05.378799       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.204.17"]
	I0127 12:36:50.441771    9948 command_runner.go:130] ! E0127 12:12:05.378872       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 12:36:50.441771    9948 command_runner.go:130] ! I0127 12:12:05.470419       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 12:36:50.442748    9948 command_runner.go:130] ! I0127 12:12:05.470552       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 12:36:50.442748    9948 command_runner.go:130] ! I0127 12:12:05.470596       1 server_linux.go:170] "Using iptables Proxier"
	I0127 12:36:50.442748    9948 command_runner.go:130] ! I0127 12:12:05.475557       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 12:36:50.442748    9948 command_runner.go:130] ! I0127 12:12:05.476697       1 server.go:497] "Version info" version="v1.32.1"
	I0127 12:36:50.442748    9948 command_runner.go:130] ! I0127 12:12:05.476717       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:50.442748    9948 command_runner.go:130] ! I0127 12:12:05.478788       1 config.go:199] "Starting service config controller"
	I0127 12:36:50.442748    9948 command_runner.go:130] ! I0127 12:12:05.478844       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 12:36:50.442748    9948 command_runner.go:130] ! I0127 12:12:05.478916       1 config.go:105] "Starting endpoint slice config controller"
	I0127 12:36:50.442748    9948 command_runner.go:130] ! I0127 12:12:05.479018       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 12:36:50.442930    9948 command_runner.go:130] ! I0127 12:12:05.480053       1 config.go:329] "Starting node config controller"
	I0127 12:36:50.442930    9948 command_runner.go:130] ! I0127 12:12:05.480113       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 12:36:50.442930    9948 command_runner.go:130] ! I0127 12:12:05.579605       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 12:36:50.442930    9948 command_runner.go:130] ! I0127 12:12:05.579669       1 shared_informer.go:320] Caches are synced for service config
	I0127 12:36:50.442930    9948 command_runner.go:130] ! I0127 12:12:05.580463       1 shared_informer.go:320] Caches are synced for node config
	I0127 12:36:50.445304    9948 logs.go:123] Gathering logs for kindnet [d758000dda95] ...
	I0127 12:36:50.445304    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d758000dda95"
	I0127 12:36:50.477702    9948 command_runner.go:130] ! I0127 12:22:14.854106       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:14.855096       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:14.855184       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:24.859265       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:24.859464       1 main.go:301] handling current node
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:24.859638       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:24.859681       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:24.860150       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:24.860242       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:34.860201       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:34.860282       1 main.go:301] handling current node
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:34.860531       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:34.860551       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:34.861114       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:34.861204       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:44.853677       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:44.853737       1 main.go:301] handling current node
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:44.853761       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:44.853838       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:44.855661       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:44.855749       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:54.856510       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:54.856632       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:54.857002       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:54.857030       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:54.857252       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:22:54.857371       1 main.go:301] handling current node
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:04.859476       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:04.859579       1 main.go:301] handling current node
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:04.859615       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:04.859623       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:04.859972       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:04.859987       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:14.853396       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:14.853515       1 main.go:301] handling current node
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:14.853537       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:14.853546       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:14.853802       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:14.853843       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:24.853600       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:24.853883       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:24.854392       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:24.854484       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:24.854688       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:24.854773       1 main.go:301] handling current node
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:34.853542       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:34.853600       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:34.854132       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:34.854286       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:34.854787       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:34.854920       1 main.go:301] handling current node
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:44.856707       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:44.856833       1 main.go:301] handling current node
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:44.856869       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:44.856877       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:44.857371       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:44.857460       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:54.853590       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:54.853737       1 main.go:301] handling current node
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:54.853759       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:54.853768       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:54.854333       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:23:54.854403       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:24:04.862983       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:24:04.863248       1 main.go:301] handling current node
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:24:04.863599       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:24:04.863808       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:24:04.864418       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.478643    9948 command_runner.go:130] ! I0127 12:24:04.864558       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.479690    9948 command_runner.go:130] ! I0127 12:24:14.854114       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.479690    9948 command_runner.go:130] ! I0127 12:24:14.854152       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.479690    9948 command_runner.go:130] ! I0127 12:24:14.854412       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.479787    9948 command_runner.go:130] ! I0127 12:24:14.854490       1 main.go:301] handling current node
	I0127 12:36:50.479787    9948 command_runner.go:130] ! I0127 12:24:14.854619       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.479787    9948 command_runner.go:130] ! I0127 12:24:14.854711       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.479787    9948 command_runner.go:130] ! I0127 12:24:24.857372       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.479787    9948 command_runner.go:130] ! I0127 12:24:24.857503       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.479787    9948 command_runner.go:130] ! I0127 12:24:24.857861       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.479889    9948 command_runner.go:130] ! I0127 12:24:24.857991       1 main.go:301] handling current node
	I0127 12:36:50.479889    9948 command_runner.go:130] ! I0127 12:24:24.858058       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.479889    9948 command_runner.go:130] ! I0127 12:24:24.858126       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.479944    9948 command_runner.go:130] ! I0127 12:24:34.854371       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.479944    9948 command_runner.go:130] ! I0127 12:24:34.854425       1 main.go:301] handling current node
	I0127 12:36:50.479944    9948 command_runner.go:130] ! I0127 12:24:34.854444       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.479944    9948 command_runner.go:130] ! I0127 12:24:34.854451       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.479944    9948 command_runner.go:130] ! I0127 12:24:34.855276       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480010    9948 command_runner.go:130] ! I0127 12:24:34.855359       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480010    9948 command_runner.go:130] ! I0127 12:24:44.862967       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480010    9948 command_runner.go:130] ! I0127 12:24:44.863069       1 main.go:301] handling current node
	I0127 12:36:50.480066    9948 command_runner.go:130] ! I0127 12:24:44.863118       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480066    9948 command_runner.go:130] ! I0127 12:24:44.863132       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480066    9948 command_runner.go:130] ! I0127 12:24:44.863438       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480066    9948 command_runner.go:130] ! I0127 12:24:44.863559       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480131    9948 command_runner.go:130] ! I0127 12:24:54.856232       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480170    9948 command_runner.go:130] ! I0127 12:24:54.856343       1 main.go:301] handling current node
	I0127 12:36:50.480170    9948 command_runner.go:130] ! I0127 12:24:54.856417       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480170    9948 command_runner.go:130] ! I0127 12:24:54.856429       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480227    9948 command_runner.go:130] ! I0127 12:24:54.857056       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480227    9948 command_runner.go:130] ! I0127 12:24:54.857188       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480269    9948 command_runner.go:130] ! I0127 12:25:04.853438       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480269    9948 command_runner.go:130] ! I0127 12:25:04.853551       1 main.go:301] handling current node
	I0127 12:36:50.480269    9948 command_runner.go:130] ! I0127 12:25:04.853573       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480303    9948 command_runner.go:130] ! I0127 12:25:04.853581       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480303    9948 command_runner.go:130] ! I0127 12:25:04.853903       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480356    9948 command_runner.go:130] ! I0127 12:25:04.853979       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480356    9948 command_runner.go:130] ! I0127 12:25:14.854463       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480356    9948 command_runner.go:130] ! I0127 12:25:14.854571       1 main.go:301] handling current node
	I0127 12:36:50.480395    9948 command_runner.go:130] ! I0127 12:25:14.854614       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480395    9948 command_runner.go:130] ! I0127 12:25:14.854630       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480395    9948 command_runner.go:130] ! I0127 12:25:14.855124       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480432    9948 command_runner.go:130] ! I0127 12:25:14.855157       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480432    9948 command_runner.go:130] ! I0127 12:25:24.853742       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480432    9948 command_runner.go:130] ! I0127 12:25:24.853838       1 main.go:301] handling current node
	I0127 12:36:50.480480    9948 command_runner.go:130] ! I0127 12:25:24.853859       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480480    9948 command_runner.go:130] ! I0127 12:25:24.853866       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480480    9948 command_runner.go:130] ! I0127 12:25:24.854822       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480517    9948 command_runner.go:130] ! I0127 12:25:24.854982       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480517    9948 command_runner.go:130] ! I0127 12:25:34.853374       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480517    9948 command_runner.go:130] ! I0127 12:25:34.853516       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480564    9948 command_runner.go:130] ! I0127 12:25:34.853756       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480564    9948 command_runner.go:130] ! I0127 12:25:34.853919       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480600    9948 command_runner.go:130] ! I0127 12:25:34.854285       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480600    9948 command_runner.go:130] ! I0127 12:25:34.854360       1 main.go:301] handling current node
	I0127 12:36:50.480600    9948 command_runner.go:130] ! I0127 12:25:44.855075       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480648    9948 command_runner.go:130] ! I0127 12:25:44.855182       1 main.go:301] handling current node
	I0127 12:36:50.480648    9948 command_runner.go:130] ! I0127 12:25:44.855201       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480648    9948 command_runner.go:130] ! I0127 12:25:44.855209       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480684    9948 command_runner.go:130] ! I0127 12:25:44.856108       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480717    9948 command_runner.go:130] ! I0127 12:25:44.856191       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480717    9948 command_runner.go:130] ! I0127 12:25:54.854358       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480717    9948 command_runner.go:130] ! I0127 12:25:54.854550       1 main.go:301] handling current node
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:25:54.854584       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:25:54.854606       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:25:54.854829       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:25:54.854893       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:04.853425       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:04.853480       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:04.854150       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:04.854221       1 main.go:301] handling current node
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:04.854322       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:04.854350       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:14.853895       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:14.854577       1 main.go:301] handling current node
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:14.854615       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:14.854639       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:14.856224       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:14.856319       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:24.858046       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:24.858200       1 main.go:301] handling current node
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:24.858527       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:24.858599       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:24.859022       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:24.859118       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:34.853783       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:34.853853       1 main.go:301] handling current node
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:34.853871       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:34.853878       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:34.854193       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:34.854260       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:44.856492       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:44.856552       1 main.go:301] handling current node
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:44.856569       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:44.856575       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:44.857163       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:44.857246       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:54.858285       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:54.858431       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:54.859101       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:54.859322       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:54.859474       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:26:54.859544       1 main.go:301] handling current node
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:27:04.858831       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.480753    9948 command_runner.go:130] ! I0127 12:27:04.858967       1 main.go:301] handling current node
	I0127 12:36:50.481283    9948 command_runner.go:130] ! I0127 12:27:04.859484       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.481283    9948 command_runner.go:130] ! I0127 12:27:04.859592       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.481340    9948 command_runner.go:130] ! I0127 12:27:04.860213       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.481340    9948 command_runner.go:130] ! I0127 12:27:04.860314       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.481340    9948 command_runner.go:130] ! I0127 12:27:14.854313       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.481340    9948 command_runner.go:130] ! I0127 12:27:14.854366       1 main.go:301] handling current node
	I0127 12:36:50.481410    9948 command_runner.go:130] ! I0127 12:27:14.854386       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.481410    9948 command_runner.go:130] ! I0127 12:27:14.854394       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.481459    9948 command_runner.go:130] ! I0127 12:27:14.854883       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.481459    9948 command_runner.go:130] ! I0127 12:27:14.855322       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:24.859182       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:24.859342       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:24.859757       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:24.859824       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:24.860078       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:24.860255       1 main.go:301] handling current node
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:34.854206       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:34.854462       1 main.go:301] handling current node
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:34.854567       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:34.854657       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:34.855188       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:34.855233       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:44.861342       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:44.861572       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:44.862224       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:44.862399       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:44.862648       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:44.862687       1 main.go:301] handling current node
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:54.853605       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:54.853658       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:54.853924       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:54.854125       1 main.go:301] handling current node
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:54.854203       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:27:54.854216       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:28:04.859858       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:28:04.859922       1 main.go:301] handling current node
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:28:04.859984       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:28:04.860038       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:28:04.860336       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:28:04.860450       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:28:14.853470       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:28:14.853607       1 main.go:301] handling current node
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:28:14.853627       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:28:14.853634       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:28:14.854800       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.481486    9948 command_runner.go:130] ! I0127 12:28:14.854899       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.482067    9948 command_runner.go:130] ! I0127 12:28:24.853786       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.482067    9948 command_runner.go:130] ! I0127 12:28:24.853841       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.482067    9948 command_runner.go:130] ! I0127 12:28:24.854051       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.482067    9948 command_runner.go:130] ! I0127 12:28:24.854078       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.482067    9948 command_runner.go:130] ! I0127 12:28:24.854192       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.482067    9948 command_runner.go:130] ! I0127 12:28:24.854297       1 main.go:301] handling current node
	I0127 12:36:50.482067    9948 command_runner.go:130] ! I0127 12:28:34.853571       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.482067    9948 command_runner.go:130] ! I0127 12:28:34.853730       1 main.go:301] handling current node
	I0127 12:36:50.482232    9948 command_runner.go:130] ! I0127 12:28:34.853756       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.482232    9948 command_runner.go:130] ! I0127 12:28:34.853765       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.482232    9948 command_runner.go:130] ! I0127 12:28:34.853988       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.482232    9948 command_runner.go:130] ! I0127 12:28:34.854180       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.482232    9948 command_runner.go:130] ! I0127 12:28:44.853630       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.482314    9948 command_runner.go:130] ! I0127 12:28:44.854161       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.482314    9948 command_runner.go:130] ! I0127 12:28:44.854753       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.482314    9948 command_runner.go:130] ! I0127 12:28:44.854886       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.482314    9948 command_runner.go:130] ! I0127 12:28:44.855270       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.482371    9948 command_runner.go:130] ! I0127 12:28:44.855393       1 main.go:301] handling current node
	I0127 12:36:50.482371    9948 command_runner.go:130] ! I0127 12:28:54.856731       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.482371    9948 command_runner.go:130] ! I0127 12:28:54.856780       1 main.go:301] handling current node
	I0127 12:36:50.482587    9948 command_runner.go:130] ! I0127 12:28:54.856800       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.482650    9948 command_runner.go:130] ! I0127 12:28:54.856807       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.482650    9948 command_runner.go:130] ! I0127 12:28:54.857466       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.482693    9948 command_runner.go:130] ! I0127 12:28:54.857531       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.482693    9948 command_runner.go:130] ! I0127 12:29:04.853996       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.482746    9948 command_runner.go:130] ! I0127 12:29:04.854093       1 main.go:301] handling current node
	I0127 12:36:50.482746    9948 command_runner.go:130] ! I0127 12:29:04.854113       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.482787    9948 command_runner.go:130] ! I0127 12:29:04.854120       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.482787    9948 command_runner.go:130] ! I0127 12:29:04.854865       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.482787    9948 command_runner.go:130] ! I0127 12:29:04.855000       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.482839    9948 command_runner.go:130] ! I0127 12:29:14.853874       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.482839    9948 command_runner.go:130] ! I0127 12:29:14.854279       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.482895    9948 command_runner.go:130] ! I0127 12:29:14.854677       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.482895    9948 command_runner.go:130] ! I0127 12:29:14.854896       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.482895    9948 command_runner.go:130] ! I0127 12:29:14.855469       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.482941    9948 command_runner.go:130] ! I0127 12:29:14.856845       1 main.go:301] handling current node
	I0127 12:36:50.482941    9948 command_runner.go:130] ! I0127 12:29:24.853660       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.482941    9948 command_runner.go:130] ! I0127 12:29:24.853766       1 main.go:301] handling current node
	I0127 12:36:50.482995    9948 command_runner.go:130] ! I0127 12:29:24.853786       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.483070    9948 command_runner.go:130] ! I0127 12:29:24.853793       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.483070    9948 command_runner.go:130] ! I0127 12:29:24.854261       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.483070    9948 command_runner.go:130] ! I0127 12:29:24.854541       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.483106    9948 command_runner.go:130] ! I0127 12:29:34.861616       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.483106    9948 command_runner.go:130] ! I0127 12:29:34.861807       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.483106    9948 command_runner.go:130] ! I0127 12:29:34.862166       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.483153    9948 command_runner.go:130] ! I0127 12:29:34.862228       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.483153    9948 command_runner.go:130] ! I0127 12:29:34.862400       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.483190    9948 command_runner.go:130] ! I0127 12:29:34.862455       1 main.go:301] handling current node
	I0127 12:36:50.483190    9948 command_runner.go:130] ! I0127 12:29:44.854294       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.483190    9948 command_runner.go:130] ! I0127 12:29:44.854418       1 main.go:301] handling current node
	I0127 12:36:50.483190    9948 command_runner.go:130] ! I0127 12:29:44.854439       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.483237    9948 command_runner.go:130] ! I0127 12:29:44.854448       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.483237    9948 command_runner.go:130] ! I0127 12:29:44.854699       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.483237    9948 command_runner.go:130] ! I0127 12:29:44.854776       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.483272    9948 command_runner.go:130] ! I0127 12:29:54.853707       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.483272    9948 command_runner.go:130] ! I0127 12:29:54.853780       1 main.go:301] handling current node
	I0127 12:36:50.483272    9948 command_runner.go:130] ! I0127 12:29:54.853914       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.483314    9948 command_runner.go:130] ! I0127 12:29:54.854022       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.483314    9948 command_runner.go:130] ! I0127 12:29:54.854423       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:29:54.854566       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:04.853625       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:04.853820       1 main.go:301] handling current node
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:04.854002       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:04.854301       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:04.854878       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:04.854986       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:14.853537       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:14.853729       1 main.go:301] handling current node
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:14.853749       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:14.853756       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:14.855013       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:14.855147       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:24.853563       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:24.853757       1 main.go:301] handling current node
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:24.853779       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:24.853786       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:24.854220       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:24.854327       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:34.858899       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:34.859124       1 main.go:301] handling current node
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:34.859146       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:34.859676       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:34.860572       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:34.860819       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:44.858769       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:44.858890       1 main.go:301] handling current node
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:44.858912       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:44.858920       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:44.859720       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:44.859809       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:54.855090       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:54.855134       1 main.go:301] handling current node
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:54.855151       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:54.855157       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:54.855561       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:30:54.855573       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:31:04.854121       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:31:04.854237       1 main.go:301] handling current node
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:31:04.854256       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:31:04.854263       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:31:04.854424       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.483351    9948 command_runner.go:130] ! I0127 12:31:04.854452       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.483905    9948 command_runner.go:130] ! I0127 12:31:04.854544       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.29.206.88 Flags: [] Table: 0 Realm: 0} 
	I0127 12:36:50.483905    9948 command_runner.go:130] ! I0127 12:31:14.853651       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.483905    9948 command_runner.go:130] ! I0127 12:31:14.853750       1 main.go:301] handling current node
	I0127 12:36:50.483990    9948 command_runner.go:130] ! I0127 12:31:14.853771       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.483990    9948 command_runner.go:130] ! I0127 12:31:14.853778       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.483990    9948 command_runner.go:130] ! I0127 12:31:14.854005       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.483990    9948 command_runner.go:130] ! I0127 12:31:14.854084       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.483990    9948 command_runner.go:130] ! I0127 12:31:24.854114       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.484051    9948 command_runner.go:130] ! I0127 12:31:24.854161       1 main.go:301] handling current node
	I0127 12:36:50.484087    9948 command_runner.go:130] ! I0127 12:31:24.854212       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:24.854223       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:24.854591       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:24.854666       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:34.862705       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:34.862793       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:34.863105       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:34.863140       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:34.863334       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:34.863362       1 main.go:301] handling current node
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:44.855275       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:44.855421       1 main.go:301] handling current node
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:44.855462       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:44.855496       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:44.856579       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:44.856690       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:54.856288       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:54.856579       1 main.go:301] handling current node
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:54.856914       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:54.857065       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:54.857508       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:31:54.857553       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:04.853556       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:04.853630       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:04.854583       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:04.854615       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:04.857114       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:04.857217       1 main.go:301] handling current node
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:14.854183       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:14.854348       1 main.go:301] handling current node
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:14.854376       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:14.854402       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:14.854890       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:14.854992       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:24.853770       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:24.854222       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:24.854498       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:24.854573       1 main.go:301] handling current node
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:24.854606       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:24.854613       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:34.853556       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:34.853715       1 main.go:301] handling current node
	I0127 12:36:50.484111    9948 command_runner.go:130] ! I0127 12:32:34.853749       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.484711    9948 command_runner.go:130] ! I0127 12:32:34.853879       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.484711    9948 command_runner.go:130] ! I0127 12:32:34.854386       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.484891    9948 command_runner.go:130] ! I0127 12:32:34.854469       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.484891    9948 command_runner.go:130] ! I0127 12:32:44.853378       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.484928    9948 command_runner.go:130] ! I0127 12:32:44.853424       1 main.go:301] handling current node
	I0127 12:36:50.484928    9948 command_runner.go:130] ! I0127 12:32:44.853441       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.484928    9948 command_runner.go:130] ! I0127 12:32:44.853447       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.484928    9948 command_runner.go:130] ! I0127 12:32:44.853735       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.484979    9948 command_runner.go:130] ! I0127 12:32:44.853765       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.485015    9948 command_runner.go:130] ! I0127 12:32:54.859317       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.485015    9948 command_runner.go:130] ! I0127 12:32:54.859396       1 main.go:301] handling current node
	I0127 12:36:50.485015    9948 command_runner.go:130] ! I0127 12:32:54.859415       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.485062    9948 command_runner.go:130] ! I0127 12:32:54.859421       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.485062    9948 command_runner.go:130] ! I0127 12:32:54.859756       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.485098    9948 command_runner.go:130] ! I0127 12:32:54.859853       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.485098    9948 command_runner.go:130] ! I0127 12:33:04.861975       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.485098    9948 command_runner.go:130] ! I0127 12:33:04.862085       1 main.go:301] handling current node
	I0127 12:36:50.485098    9948 command_runner.go:130] ! I0127 12:33:04.862106       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.485145    9948 command_runner.go:130] ! I0127 12:33:04.862113       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.485145    9948 command_runner.go:130] ! I0127 12:33:04.862780       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.485181    9948 command_runner.go:130] ! I0127 12:33:04.862861       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.485181    9948 command_runner.go:130] ! I0127 12:33:14.853823       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:50.485181    9948 command_runner.go:130] ! I0127 12:33:14.853859       1 main.go:301] handling current node
	I0127 12:36:50.485228    9948 command_runner.go:130] ! I0127 12:33:14.853877       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:50.485228    9948 command_runner.go:130] ! I0127 12:33:14.853884       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:50.485264    9948 command_runner.go:130] ! I0127 12:33:14.854153       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:50.485264    9948 command_runner.go:130] ! I0127 12:33:14.854165       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:50.501552    9948 logs.go:123] Gathering logs for container status ...
	I0127 12:36:50.501552    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:36:50.567788    9948 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0127 12:36:50.567917    9948 command_runner.go:130] > 528243cca8bfb       8c811b4aec35f                                                                                         3 seconds ago        Running             busybox                   1                   ef504f99724cb       busybox-58667487b6-2jq9j
	I0127 12:36:50.567917    9948 command_runner.go:130] > b3a9ed6e130c0       c69fa2e9cbf5f                                                                                         3 seconds ago        Running             coredns                   1                   6b22dbb5ef3e0       coredns-668d6bf9bc-2qw6w
	I0127 12:36:50.567917    9948 command_runner.go:130] > 389606c183b19       6e38f40d628db                                                                                         23 seconds ago       Running             storage-provisioner       2                   b613e9a7a3565       storage-provisioner
	I0127 12:36:50.568044    9948 command_runner.go:130] > 373bec67270fb       50415e5d05f05                                                                                         About a minute ago   Running             kindnet-cni               1                   d43e4cc62e087       kindnet-z2hqq
	I0127 12:36:50.568044    9948 command_runner.go:130] > 9b2db1d0cb61c       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   b613e9a7a3565       storage-provisioner
	I0127 12:36:50.568119    9948 command_runner.go:130] > 0283b35dee3cc       e29f9c7391fd9                                                                                         About a minute ago   Running             kube-proxy                1                   34d579bb511fe       kube-proxy-s46mv
	I0127 12:36:50.568155    9948 command_runner.go:130] > ea993630a3109       95c0bda56fc4d                                                                                         About a minute ago   Running             kube-apiserver            0                   5601285bb260a       kube-apiserver-multinode-659000
	I0127 12:36:50.568201    9948 command_runner.go:130] > 0ef2a3b50bae8       a9e7e6b294baf                                                                                         About a minute ago   Running             etcd                      0                   cdf534e99b2bb       etcd-multinode-659000
	I0127 12:36:50.568235    9948 command_runner.go:130] > ed51c7eaa9666       2b0d6572d062c                                                                                         About a minute ago   Running             kube-scheduler            1                   910315897d842       kube-scheduler-multinode-659000
	I0127 12:36:50.568235    9948 command_runner.go:130] > 8d4872cda28de       019ee182b58e2                                                                                         About a minute ago   Running             kube-controller-manager   1                   b770a357d9830       kube-controller-manager-multinode-659000
	I0127 12:36:50.568235    9948 command_runner.go:130] > 998a64b2baa2d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   4c82c0ec4aeaa       busybox-58667487b6-2jq9j
	I0127 12:36:50.568235    9948 command_runner.go:130] > f818dd15d8b02       c69fa2e9cbf5f                                                                                         24 minutes ago       Exited              coredns                   0                   4a53e133a1cd6       coredns-668d6bf9bc-2qw6w
	I0127 12:36:50.568235    9948 command_runner.go:130] > d758000dda95d       kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108              24 minutes ago       Exited              kindnet-cni               0                   f2d0bd65fe50d       kindnet-z2hqq
	I0127 12:36:50.568235    9948 command_runner.go:130] > bbec7ccef7da5       e29f9c7391fd9                                                                                         24 minutes ago       Exited              kube-proxy                0                   319cddeebceb6       kube-proxy-s46mv
	I0127 12:36:50.568235    9948 command_runner.go:130] > a16e06a038601       2b0d6572d062c                                                                                         24 minutes ago       Exited              kube-scheduler            0                   5423fc5113290       kube-scheduler-multinode-659000
	I0127 12:36:50.568235    9948 command_runner.go:130] > e07a66f8f6196       019ee182b58e2                                                                                         24 minutes ago       Exited              kube-controller-manager   0                   1bd5bf99bede3       kube-controller-manager-multinode-659000
	I0127 12:36:50.570849    9948 logs.go:123] Gathering logs for kubelet ...
	I0127 12:36:50.570939    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:36:50.608551    9948 command_runner.go:130] > Jan 27 12:35:32 multinode-659000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0127 12:36:50.608689    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1507]: I0127 12:35:33.096330    1507 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0127 12:36:50.608689    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1507]: I0127 12:35:33.097069    1507 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:50.608876    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1507]: I0127 12:35:33.098504    1507 server.go:954] "Client rotation is on, will bootstrap in background"
	I0127 12:36:50.608915    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1507]: E0127 12:35:33.099084    1507 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0127 12:36:50.608949    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:50.609007    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0127 12:36:50.609041    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0127 12:36:50.609041    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1565]: I0127 12:35:33.855505    1565 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1565]: I0127 12:35:33.856023    1565 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1565]: I0127 12:35:33.856456    1565 server.go:954] "Client rotation is on, will bootstrap in background"
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1565]: E0127 12:35:33.856573    1565 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:34 multinode-659000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.167839    1648 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.168570    1648 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.169526    1648 server.go:954] "Client rotation is on, will bootstrap in background"
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.171330    1648 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.190537    1648 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.208219    1648 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.208354    1648 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.217489    1648 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.217603    1648 server.go:841] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.218319    1648 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.218396    1648 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-659000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.218720    1648 topology_manager.go:138] "Creating topology manager with none policy"
	I0127 12:36:50.609070    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.218780    1648 container_manager_linux.go:304] "Creating device plugin manager"
	I0127 12:36:50.609671    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.219430    1648 state_mem.go:36] "Initialized new in-memory state store"
	I0127 12:36:50.609671    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.221396    1648 kubelet.go:446] "Attempting to sync node with API server"
	I0127 12:36:50.609736    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.221465    1648 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0127 12:36:50.609736    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.221524    1648 kubelet.go:352] "Adding apiserver pod source"
	I0127 12:36:50.609736    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.221568    1648 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0127 12:36:50.609873    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.230949    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-659000&limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:50.609910    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.231085    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-659000&limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:50.609910    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.232363    1648 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="docker" version="27.4.0" apiVersion="v1"
	I0127 12:36:50.609960    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.236967    1648 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0127 12:36:50.609996    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.237190    1648 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0127 12:36:50.609996    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.245589    1648 watchdog_linux.go:99] "Systemd watchdog is not enabled"
	I0127 12:36:50.610045    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.245760    1648 server.go:1287] "Started kubelet"
	I0127 12:36:50.610081    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.246317    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:50.610129    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.246411    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.246814    1648 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.247495    1648 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.249106    1648 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.260914    1648 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.262947    1648 server.go:490] "Adding debug handlers to kubelet server"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.264052    1648 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.267083    1648 volume_manager.go:297] "Starting Kubelet Volume Manager"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.267485    1648 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"multinode-659000\" not found"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.270946    1648 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.29.198.106:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-659000.181e8cd12d2fa1af  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-659000,UID:multinode-659000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-659000,},FirstTimestamp:2025-01-27 12:35:36.245739951 +0000 UTC m=+0.150414507,LastTimestamp:2025-01-27 12:35:36.245739951 +0000 UTC m=+0.150414507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-659000,}"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.275270    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-659000?timeout=10s\": dial tcp 172.29.198.106:8443: connect: connection refused" interval="200ms"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.275715    1648 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.280615    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.280911    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.282354    1648 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.282424    1648 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.282441    1648 factory.go:221] Registration of the systemd container factory successfully
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.345823    1648 reconciler.go:26] "Reconciler: start to sync state"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.348883    1648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.352701    1648 cpu_manager.go:221] "Starting CPU manager" policy="none"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.352736    1648 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.352866    1648 state_mem.go:36] "Initialized new in-memory state store"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353577    1648 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353729    1648 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353769    1648 policy_none.go:49] "None policy: Start"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353902    1648 memory_manager.go:186] "Starting memorymanager" policy="None"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353967    1648 state_mem.go:35] "Initializing new in-memory state store"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.354751    1648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.354791    1648 status_manager.go:227] "Starting to sync pod status with apiserver"
	I0127 12:36:50.610167    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.354811    1648 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I0127 12:36:50.610744    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.354819    1648 kubelet.go:2388] "Starting kubelet main sync loop"
	I0127 12:36:50.610744    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.354862    1648 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0127 12:36:50.610807    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.355393    1648 state_mem.go:75] "Updated machine memory state"
	I0127 12:36:50.610807    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.358802    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:50.610807    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.358857    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:50.610914    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.371233    1648 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"multinode-659000\" not found"
	I0127 12:36:50.610951    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.373395    1648 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0127 12:36:50.611001    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.373786    1648 eviction_manager.go:189] "Eviction manager: starting control loop"
	I0127 12:36:50.611038    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.373887    1648 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0127 12:36:50.611078    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.380088    1648 iptables.go:577] "Could not set up iptables canary" err=<
	I0127 12:36:50.611078    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.380760    1648 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.380984    1648 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-659000\" not found"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.382902    1648 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.468172    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.468821    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c82c0ec4aeaa9b21462a8248326ae982d6f7a0aee31347f1a58d216f0335177"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.468934    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2d0bd65fe50d3b8a64acf8ee065aa49d1a51b768c5fe6fe9532d26fa35aa7b1"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.468988    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bd5bf99bede3e691e572fc4b8a37f4f42f8a9b2520adf8bc87bdf76e8258a4b"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.469050    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5423fc5113290b937df9b531c5fbd748c5d927fd5e170e8126b67bae6a814384"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.470252    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.475717    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.477090    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-659000?timeout=10s\": dial tcp 172.29.198.106:8443: connect: connection refused" interval="400ms"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.480196    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.198.106:8443: connect: connection refused" node="multinode-659000"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.487429    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc9ef8ee86ec2e354006c4c56f82fe9ec4df472096628ad620faba06fa0b1ff8"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.508448    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a53e133a1cd6ab9514cb15ac3c4f1d5683d17008b482cebb08bf4809e060709"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.523288    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="319cddeebceb6ec82b5865f1c67eaf88948a282ace1113869910f5bf8c717d83"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.545844    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b522c4c9f4c776ea35298b9eaf7c05d64bddd6f385e12252bdf6aada9a3e20d"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.566476    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e6c90fc43fa6c0754218ff1c4162045d-kubeconfig\") pod \"kube-scheduler-multinode-659000\" (UID: \"e6c90fc43fa6c0754218ff1c4162045d\") " pod="kube-system/kube-scheduler-multinode-659000"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.566534    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b9fbd177058ba298cde2a92c4ef5c601-k8s-certs\") pod \"kube-apiserver-multinode-659000\" (UID: \"b9fbd177058ba298cde2a92c4ef5c601\") " pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.566560    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-kubeconfig\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:50.611115    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567472    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:50.611701    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567527    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/575cefa3aa8017dce576fa244e719a4e-etcd-certs\") pod \"etcd-multinode-659000\" (UID: \"575cefa3aa8017dce576fa244e719a4e\") " pod="kube-system/etcd-multinode-659000"
	I0127 12:36:50.611765    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567546    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/575cefa3aa8017dce576fa244e719a4e-etcd-data\") pod \"etcd-multinode-659000\" (UID: \"575cefa3aa8017dce576fa244e719a4e\") " pod="kube-system/etcd-multinode-659000"
	I0127 12:36:50.611765    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567563    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b9fbd177058ba298cde2a92c4ef5c601-ca-certs\") pod \"kube-apiserver-multinode-659000\" (UID: \"b9fbd177058ba298cde2a92c4ef5c601\") " pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:50.611885    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567580    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-ca-certs\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:50.611921    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567687    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-flexvolume-dir\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:50.611969    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567720    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-k8s-certs\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:50.612005    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567745    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b9fbd177058ba298cde2a92c4ef5c601-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-659000\" (UID: \"b9fbd177058ba298cde2a92c4ef5c601\") " pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:50.612054    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567166    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51ee4649b24aa281b3767c049c3c1d4063e516b98501648152da39ee45cb0b26"
	I0127 12:36:50.612089    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.569350    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.612138    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.570289    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.612138    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.681872    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:50.612174    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.682569    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.198.106:8443: connect: connection refused" node="multinode-659000"
	I0127 12:36:50.612222    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.878668    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-659000?timeout=10s\": dial tcp 172.29.198.106:8443: connect: connection refused" interval="800ms"
	I0127 12:36:50.612475    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: W0127 12:35:37.056372    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:50.612504    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.056534    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:50.612585    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: I0127 12:35:37.084276    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:50.612612    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.085344    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.198.106:8443: connect: connection refused" node="multinode-659000"
	I0127 12:36:50.612652    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: W0127 12:35:37.281985    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:50.612688    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.282078    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:50.612736    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: W0127 12:35:37.629266    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-659000&limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:50.612815    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.629409    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-659000&limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:50.612851    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: W0127 12:35:37.673700    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:50.612898    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.673876    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:50.612934    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.680515    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-659000?timeout=10s\": dial tcp 172.29.198.106:8443: connect: connection refused" interval="1.6s"
	I0127 12:36:50.612934    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: I0127 12:35:37.887498    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:50.612982    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.888458    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.198.106:8443: connect: connection refused" node="multinode-659000"
	I0127 12:36:50.613017    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: E0127 12:35:39.058364    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.613065    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: E0127 12:35:39.084210    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.613065    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: E0127 12:35:39.099659    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.613149    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: E0127 12:35:39.112572    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.613185    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: I0127 12:35:39.489967    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:50.613234    9948 command_runner.go:130] > Jan 27 12:35:40 multinode-659000 kubelet[1648]: E0127 12:35:40.123734    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.613269    9948 command_runner.go:130] > Jan 27 12:35:40 multinode-659000 kubelet[1648]: E0127 12:35:40.124212    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.613269    9948 command_runner.go:130] > Jan 27 12:35:40 multinode-659000 kubelet[1648]: E0127 12:35:40.124507    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.613315    9948 command_runner.go:130] > Jan 27 12:35:40 multinode-659000 kubelet[1648]: E0127 12:35:40.124790    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.613351    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.138584    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.613351    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.139346    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.613398    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.139719    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:50.613437    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.469180    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-659000"
	I0127 12:36:50.613486    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.513020    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-multinode-659000\" already exists" pod="kube-system/etcd-multinode-659000"
	I0127 12:36:50.613486    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.513064    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:50.613522    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.538800    1648 kubelet_node_status.go:125] "Node was previously registered" node="multinode-659000"
	I0127 12:36:50.613522    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.538905    1648 kubelet_node_status.go:79] "Successfully registered node" node="multinode-659000"
	I0127 12:36:50.613565    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.538949    1648 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0127 12:36:50.613601    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.539897    1648 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0127 12:36:50.613601    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.540655    1648 setters.go:602] "Node became not ready" node="multinode-659000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-27T12:35:41Z","lastTransitionTime":"2025-01-27T12:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0127 12:36:50.613683    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.555833    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-multinode-659000\" already exists" pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:50.613683    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.555924    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:50.613724    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.574323    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-multinode-659000\" already exists" pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:50.613760    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.574484    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-multinode-659000"
	I0127 12:36:50.613760    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.589698    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-multinode-659000\" already exists" pod="kube-system/kube-scheduler-multinode-659000"
	I0127 12:36:50.613807    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.247993    1648 apiserver.go:52] "Watching apiserver"
	I0127 12:36:50.613843    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.255092    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-659000" podUID="f19e9efc-57cc-4e2a-b365-920592a7f352"
	I0127 12:36:50.613843    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.257281    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.613891    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.257504    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.613926    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.261197    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/etcd-multinode-659000" podUID="d2a9c448-86a1-48e3-8b48-345c937e5bb4"
	I0127 12:36:50.613973    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.277187    1648 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0127 12:36:50.613973    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.304401    1648 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/etcd-multinode-659000"
	I0127 12:36:50.614008    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.304607    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-659000"
	I0127 12:36:50.614055    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.309849    1648 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:50.614090    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.309963    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:50.614090    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.343249    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae3b8daf-d674-4cfe-8652-cb5ff6ba8615-lib-modules\") pod \"kube-proxy-s46mv\" (UID: \"ae3b8daf-d674-4cfe-8652-cb5ff6ba8615\") " pod="kube-system/kube-proxy-s46mv"
	I0127 12:36:50.614133    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.343617    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9b617a9c-e2b8-45fd-bee2-45cb03d4cd42-cni-cfg\") pod \"kindnet-z2hqq\" (UID: \"9b617a9c-e2b8-45fd-bee2-45cb03d4cd42\") " pod="kube-system/kindnet-z2hqq"
	I0127 12:36:50.614170    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.343779    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b617a9c-e2b8-45fd-bee2-45cb03d4cd42-lib-modules\") pod \"kindnet-z2hqq\" (UID: \"9b617a9c-e2b8-45fd-bee2-45cb03d4cd42\") " pod="kube-system/kindnet-z2hqq"
	I0127 12:36:50.614271    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.343961    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae3b8daf-d674-4cfe-8652-cb5ff6ba8615-xtables-lock\") pod \"kube-proxy-s46mv\" (UID: \"ae3b8daf-d674-4cfe-8652-cb5ff6ba8615\") " pod="kube-system/kube-proxy-s46mv"
	I0127 12:36:50.614334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.344263    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b617a9c-e2b8-45fd-bee2-45cb03d4cd42-xtables-lock\") pod \"kindnet-z2hqq\" (UID: \"9b617a9c-e2b8-45fd-bee2-45cb03d4cd42\") " pod="kube-system/kindnet-z2hqq"
	I0127 12:36:50.614374    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.344443    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bcfd7913-1bc0-4c24-882f-2be92ec9b046-tmp\") pod \"storage-provisioner\" (UID: \"bcfd7913-1bc0-4c24-882f-2be92ec9b046\") " pod="kube-system/storage-provisioner"
	I0127 12:36:50.614409    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.345456    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:50.614481    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.345573    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:42.845554363 +0000 UTC m=+6.750229019 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:50.614519    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.362165    1648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bf31ca1befb4fb3e8f2fd27458a3b80" path="/var/lib/kubelet/pods/6bf31ca1befb4fb3e8f2fd27458a3b80/volumes"
	I0127 12:36:50.614519    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.363294    1648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7291ea72d8be6e47ed8b536906d73549" path="/var/lib/kubelet/pods/7291ea72d8be6e47ed8b536906d73549/volumes"
	I0127 12:36:50.614590    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.396667    1648 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	I0127 12:36:50.614590    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.400478    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.614633    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.400505    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.614737    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.400550    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:42.900534148 +0000 UTC m=+6.805208804 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.614874    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.494698    1648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-659000" podStartSLOduration=0.494540064 podStartE2EDuration="494.540064ms" podCreationTimestamp="2025-01-27 12:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 12:35:42.473709794 +0000 UTC m=+6.378384350" watchObservedRunningTime="2025-01-27 12:35:42.494540064 +0000 UTC m=+6.399214620"
	I0127 12:36:50.614934    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.494964    1648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-659000" podStartSLOduration=0.494955765 podStartE2EDuration="494.955765ms" podCreationTimestamp="2025-01-27 12:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 12:35:42.493805361 +0000 UTC m=+6.398480017" watchObservedRunningTime="2025-01-27 12:35:42.494955765 +0000 UTC m=+6.399630321"
	I0127 12:36:50.614976    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.849608    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:50.615030    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.849827    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:43.849803559 +0000 UTC m=+7.754478115 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:50.615030    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.951539    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.615085    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.951579    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.951637    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:43.951620201 +0000 UTC m=+7.856294757 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: I0127 12:35:43.230846    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b613e9a7a356580fd5381e358408317fd6120a119c23f3f196adda302e5ca97f"
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: I0127 12:35:43.240666    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34d579bb511fec290478f20b13002063b43c1a71bd6f2f45f1d83bbd8ac971ab"
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.588436    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: I0127 12:35:43.594121    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d43e4cc62e0877d4b65191623d58195cd33c60eff33c6e49e605f69620d5115f"
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: I0127 12:35:43.594816    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-659000" podUID="f19e9efc-57cc-4e2a-b365-920592a7f352"
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.861607    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.861754    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:45.861734662 +0000 UTC m=+9.766409318 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.962791    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.962845    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.963033    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:45.962955102 +0000 UTC m=+9.867629758 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:44 multinode-659000 kubelet[1648]: E0127 12:35:44.356390    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.355639    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.883867    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:50.615204    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.883991    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:49.883972962 +0000 UTC m=+13.788647618 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:50.615786    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.984260    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.615786    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.984313    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.615786    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.984377    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:49.984359299 +0000 UTC m=+13.889033855 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.615786    9948 command_runner.go:130] > Jan 27 12:35:46 multinode-659000 kubelet[1648]: E0127 12:35:46.358731    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.615948    9948 command_runner.go:130] > Jan 27 12:35:46 multinode-659000 kubelet[1648]: E0127 12:35:46.386967    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:47 multinode-659000 kubelet[1648]: E0127 12:35:47.355582    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:48 multinode-659000 kubelet[1648]: E0127 12:35:48.356308    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:49 multinode-659000 kubelet[1648]: E0127 12:35:49.356027    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:49 multinode-659000 kubelet[1648]: E0127 12:35:49.925365    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:49 multinode-659000 kubelet[1648]: E0127 12:35:49.925459    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:57.925443152 +0000 UTC m=+21.830117808 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:50 multinode-659000 kubelet[1648]: E0127 12:35:50.027100    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:50 multinode-659000 kubelet[1648]: E0127 12:35:50.027219    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:50 multinode-659000 kubelet[1648]: E0127 12:35:50.027346    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:58.027289813 +0000 UTC m=+21.931964469 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:50 multinode-659000 kubelet[1648]: E0127 12:35:50.355319    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:51 multinode-659000 kubelet[1648]: E0127 12:35:51.356503    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:51 multinode-659000 kubelet[1648]: E0127 12:35:51.388594    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:52 multinode-659000 kubelet[1648]: E0127 12:35:52.357390    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:53 multinode-659000 kubelet[1648]: E0127 12:35:53.355568    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:54 multinode-659000 kubelet[1648]: E0127 12:35:54.355531    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:55 multinode-659000 kubelet[1648]: E0127 12:35:55.356228    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:56 multinode-659000 kubelet[1648]: E0127 12:35:56.355726    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:56 multinode-659000 kubelet[1648]: E0127 12:35:56.392446    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:50.615982    9948 command_runner.go:130] > Jan 27 12:35:57 multinode-659000 kubelet[1648]: E0127 12:35:57.355790    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.616565    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.001233    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:50.616565    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.001401    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:36:14.001383565 +0000 UTC m=+37.906058121 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:50.616565    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.101493    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.616565    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.101659    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.101748    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:36:14.101732786 +0000 UTC m=+38.006407342 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.365026    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:35:59 multinode-659000 kubelet[1648]: E0127 12:35:59.356031    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:36:00 multinode-659000 kubelet[1648]: E0127 12:36:00.356282    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:36:01 multinode-659000 kubelet[1648]: E0127 12:36:01.356209    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:36:01 multinode-659000 kubelet[1648]: E0127 12:36:01.394292    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:36:02 multinode-659000 kubelet[1648]: E0127 12:36:02.355777    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:36:03 multinode-659000 kubelet[1648]: E0127 12:36:03.356166    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:36:04 multinode-659000 kubelet[1648]: E0127 12:36:04.356089    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:36:05 multinode-659000 kubelet[1648]: E0127 12:36:05.355458    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:36:06 multinode-659000 kubelet[1648]: E0127 12:36:06.356120    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:36:06 multinode-659000 kubelet[1648]: E0127 12:36:06.396811    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:36:07 multinode-659000 kubelet[1648]: E0127 12:36:07.355573    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.616758    9948 command_runner.go:130] > Jan 27 12:36:08 multinode-659000 kubelet[1648]: E0127 12:36:08.355837    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.617339    9948 command_runner.go:130] > Jan 27 12:36:09 multinode-659000 kubelet[1648]: E0127 12:36:09.355284    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.617339    9948 command_runner.go:130] > Jan 27 12:36:10 multinode-659000 kubelet[1648]: E0127 12:36:10.356199    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.617339    9948 command_runner.go:130] > Jan 27 12:36:11 multinode-659000 kubelet[1648]: E0127 12:36:11.356023    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.617339    9948 command_runner.go:130] > Jan 27 12:36:11 multinode-659000 kubelet[1648]: E0127 12:36:11.398054    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:50.617507    9948 command_runner.go:130] > Jan 27 12:36:12 multinode-659000 kubelet[1648]: E0127 12:36:12.355492    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.617539    9948 command_runner.go:130] > Jan 27 12:36:13 multinode-659000 kubelet[1648]: E0127 12:36:13.356291    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.617588    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.058689    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.058911    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:36:46.058858304 +0000 UTC m=+69.963532860 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.159091    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.159277    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.159495    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:36:46.15947175 +0000 UTC m=+70.064146406 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.357000    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:15 multinode-659000 kubelet[1648]: I0127 12:36:15.031682    1648 scope.go:117] "RemoveContainer" containerID="134620caeeb93fda5b32a71962e13d1994830a35b93b18ad2387296500dff7b5"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:15 multinode-659000 kubelet[1648]: I0127 12:36:15.032024    1648 scope.go:117] "RemoveContainer" containerID="9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:15 multinode-659000 kubelet[1648]: E0127 12:36:15.032236    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bcfd7913-1bc0-4c24-882f-2be92ec9b046)\"" pod="kube-system/storage-provisioner" podUID="bcfd7913-1bc0-4c24-882f-2be92ec9b046"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:15 multinode-659000 kubelet[1648]: E0127 12:36:15.355738    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:16 multinode-659000 kubelet[1648]: E0127 12:36:16.356191    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:16 multinode-659000 kubelet[1648]: E0127 12:36:16.399212    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:17 multinode-659000 kubelet[1648]: E0127 12:36:17.355082    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:18 multinode-659000 kubelet[1648]: E0127 12:36:18.356067    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:19 multinode-659000 kubelet[1648]: E0127 12:36:19.355675    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:20 multinode-659000 kubelet[1648]: E0127 12:36:20.356455    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:21 multinode-659000 kubelet[1648]: E0127 12:36:21.355971    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:21 multinode-659000 kubelet[1648]: E0127 12:36:21.401078    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:22 multinode-659000 kubelet[1648]: E0127 12:36:22.355954    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.617619    9948 command_runner.go:130] > Jan 27 12:36:23 multinode-659000 kubelet[1648]: E0127 12:36:23.355387    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.618208    9948 command_runner.go:130] > Jan 27 12:36:24 multinode-659000 kubelet[1648]: E0127 12:36:24.355437    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.618208    9948 command_runner.go:130] > Jan 27 12:36:25 multinode-659000 kubelet[1648]: E0127 12:36:25.356289    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.618208    9948 command_runner.go:130] > Jan 27 12:36:26 multinode-659000 kubelet[1648]: E0127 12:36:26.356493    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.618208    9948 command_runner.go:130] > Jan 27 12:36:26 multinode-659000 kubelet[1648]: E0127 12:36:26.402364    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:50.618401    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 kubelet[1648]: E0127 12:36:27.356407    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.618401    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 kubelet[1648]: I0127 12:36:27.357050    1648 scope.go:117] "RemoveContainer" containerID="9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f"
	I0127 12:36:50.618451    9948 command_runner.go:130] > Jan 27 12:36:28 multinode-659000 kubelet[1648]: E0127 12:36:28.356371    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.618481    9948 command_runner.go:130] > Jan 27 12:36:29 multinode-659000 kubelet[1648]: E0127 12:36:29.355555    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.618529    9948 command_runner.go:130] > Jan 27 12:36:30 multinode-659000 kubelet[1648]: E0127 12:36:30.356227    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:50.618560    9948 command_runner.go:130] > Jan 27 12:36:31 multinode-659000 kubelet[1648]: E0127 12:36:31.356043    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:50.618607    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]: I0127 12:36:36.363314    1648 scope.go:117] "RemoveContainer" containerID="5f274e5a8851d2aeb5403952c3fba0274fe53614e2e0995d1046693d7e725d5d"
	I0127 12:36:50.618652    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]: E0127 12:36:36.393311    1648 iptables.go:577] "Could not set up iptables canary" err=<
	I0127 12:36:50.618652    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0127 12:36:50.618699    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0127 12:36:50.618728    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0127 12:36:50.618728    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0127 12:36:50.618728    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]: I0127 12:36:36.409087    1648 scope.go:117] "RemoveContainer" containerID="f91e9c2d3ba64a6d34c9bab7c1953b46f4006e0bb493bd1ae993c489cd76e02c"
	I0127 12:36:50.663168    9948 logs.go:123] Gathering logs for dmesg ...
	I0127 12:36:50.663168    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:36:50.687151    9948 command_runner.go:130] > [Jan27 12:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0127 12:36:50.687151    9948 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0127 12:36:50.687151    9948 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0127 12:36:50.687151    9948 command_runner.go:130] > [  +0.124628] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0127 12:36:50.687151    9948 command_runner.go:130] > [  +0.022511] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0127 12:36:50.687347    9948 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0127 12:36:50.687361    9948 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0127 12:36:50.687424    9948 command_runner.go:130] > [  +0.069272] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0127 12:36:50.687424    9948 command_runner.go:130] > [  +0.020914] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0127 12:36:50.687464    9948 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0127 12:36:50.687464    9948 command_runner.go:130] > [Jan27 12:34] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0127 12:36:50.687464    9948 command_runner.go:130] > [  +0.706235] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0127 12:36:50.687464    9948 command_runner.go:130] > [  +1.791193] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0127 12:36:50.687464    9948 command_runner.go:130] > [  +6.780102] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0127 12:36:50.687561    9948 command_runner.go:130] > [  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0127 12:36:50.687561    9948 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0127 12:36:50.687561    9948 command_runner.go:130] > [Jan27 12:35] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	I0127 12:36:50.687561    9948 command_runner.go:130] > [  +0.194598] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	I0127 12:36:50.687561    9948 command_runner.go:130] > [ +25.881577] systemd-fstab-generator[1029]: Ignoring "noauto" option for root device
	I0127 12:36:50.687655    9948 command_runner.go:130] > [  +0.104839] kauditd_printk_skb: 75 callbacks suppressed
	I0127 12:36:50.687655    9948 command_runner.go:130] > [  +0.497850] systemd-fstab-generator[1069]: Ignoring "noauto" option for root device
	I0127 12:36:50.687655    9948 command_runner.go:130] > [  +0.189754] systemd-fstab-generator[1081]: Ignoring "noauto" option for root device
	I0127 12:36:50.687655    9948 command_runner.go:130] > [  +0.209865] systemd-fstab-generator[1095]: Ignoring "noauto" option for root device
	I0127 12:36:50.687655    9948 command_runner.go:130] > [  +2.995294] systemd-fstab-generator[1337]: Ignoring "noauto" option for root device
	I0127 12:36:50.687655    9948 command_runner.go:130] > [  +0.193187] systemd-fstab-generator[1349]: Ignoring "noauto" option for root device
	I0127 12:36:50.687727    9948 command_runner.go:130] > [  +0.167597] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	I0127 12:36:50.687727    9948 command_runner.go:130] > [  +0.247752] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	I0127 12:36:50.687727    9948 command_runner.go:130] > [  +0.858687] systemd-fstab-generator[1500]: Ignoring "noauto" option for root device
	I0127 12:36:50.687727    9948 command_runner.go:130] > [  +0.090112] kauditd_printk_skb: 206 callbacks suppressed
	I0127 12:36:50.687727    9948 command_runner.go:130] > [  +3.380441] systemd-fstab-generator[1641]: Ignoring "noauto" option for root device
	I0127 12:36:50.687727    9948 command_runner.go:130] > [  +1.786352] kauditd_printk_skb: 64 callbacks suppressed
	I0127 12:36:50.687727    9948 command_runner.go:130] > [  +5.236723] kauditd_printk_skb: 10 callbacks suppressed
	I0127 12:36:50.687802    9948 command_runner.go:130] > [  +4.105586] systemd-fstab-generator[2522]: Ignoring "noauto" option for root device
	I0127 12:36:50.687802    9948 command_runner.go:130] > [Jan27 12:36] kauditd_printk_skb: 70 callbacks suppressed
	I0127 12:36:50.689662    9948 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:36:50.689662    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 12:36:50.977343    9948 command_runner.go:130] > Name:               multinode-659000
	I0127 12:36:50.977343    9948 command_runner.go:130] > Roles:              control-plane
	I0127 12:36:50.977343    9948 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0127 12:36:50.977343    9948 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0127 12:36:50.977343    9948 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0127 12:36:50.977343    9948 command_runner.go:130] >                     kubernetes.io/hostname=multinode-659000
	I0127 12:36:50.977343    9948 command_runner.go:130] >                     kubernetes.io/os=linux
	I0127 12:36:50.977343    9948 command_runner.go:130] >                     minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	I0127 12:36:50.977343    9948 command_runner.go:130] >                     minikube.k8s.io/name=multinode-659000
	I0127 12:36:50.977343    9948 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0127 12:36:50.977504    9948 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_01_27T12_12_00_0700
	I0127 12:36:50.977534    9948 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0127 12:36:50.977534    9948 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0127 12:36:50.977534    9948 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0127 12:36:50.977534    9948 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0127 12:36:50.977534    9948 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0127 12:36:50.977626    9948 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0127 12:36:50.977645    9948 command_runner.go:130] > CreationTimestamp:  Mon, 27 Jan 2025 12:11:55 +0000
	I0127 12:36:50.977645    9948 command_runner.go:130] > Taints:             <none>
	I0127 12:36:50.977645    9948 command_runner.go:130] > Unschedulable:      false
	I0127 12:36:50.977645    9948 command_runner.go:130] > Lease:
	I0127 12:36:50.977645    9948 command_runner.go:130] >   HolderIdentity:  multinode-659000
	I0127 12:36:50.977645    9948 command_runner.go:130] >   AcquireTime:     <unset>
	I0127 12:36:50.977645    9948 command_runner.go:130] >   RenewTime:       Mon, 27 Jan 2025 12:36:42 +0000
	I0127 12:36:50.977703    9948 command_runner.go:130] > Conditions:
	I0127 12:36:50.977788    9948 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0127 12:36:50.977788    9948 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0127 12:36:50.977788    9948 command_runner.go:130] >   MemoryPressure   False   Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:11:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0127 12:36:50.977788    9948 command_runner.go:130] >   DiskPressure     False   Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:11:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0127 12:36:50.977788    9948 command_runner.go:130] >   PIDPressure      False   Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:11:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0127 12:36:50.977788    9948 command_runner.go:130] >   Ready            True    Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:36:32 +0000   KubeletReady                 kubelet is posting ready status
	I0127 12:36:50.977788    9948 command_runner.go:130] > Addresses:
	I0127 12:36:50.977788    9948 command_runner.go:130] >   InternalIP:  172.29.198.106
	I0127 12:36:50.977788    9948 command_runner.go:130] >   Hostname:    multinode-659000
	I0127 12:36:50.977788    9948 command_runner.go:130] > Capacity:
	I0127 12:36:50.977788    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:50.977788    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:50.977788    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:50.977788    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:50.977788    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:50.977788    9948 command_runner.go:130] > Allocatable:
	I0127 12:36:50.977788    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:50.978341    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:50.978341    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:50.978341    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:50.978341    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:50.978341    9948 command_runner.go:130] > System Info:
	I0127 12:36:50.978341    9948 command_runner.go:130] >   Machine ID:                 312902fc96b948148d51eecf097c4a9d
	I0127 12:36:50.978341    9948 command_runner.go:130] >   System UUID:                be6234aa-9e29-bb41-8165-59b265a4d7d0
	I0127 12:36:50.978341    9948 command_runner.go:130] >   Boot ID:                    058425a5-0652-4c5c-a517-2369b8cac13d
	I0127 12:36:50.978453    9948 command_runner.go:130] >   Kernel Version:             5.10.207
	I0127 12:36:50.978453    9948 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0127 12:36:50.978491    9948 command_runner.go:130] >   Operating System:           linux
	I0127 12:36:50.978491    9948 command_runner.go:130] >   Architecture:               amd64
	I0127 12:36:50.978491    9948 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0127 12:36:50.978542    9948 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0127 12:36:50.978542    9948 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0127 12:36:50.978570    9948 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0127 12:36:50.978600    9948 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0127 12:36:50.978600    9948 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0127 12:36:50.978600    9948 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0127 12:36:50.978639    9948 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0127 12:36:50.978639    9948 command_runner.go:130] >   default                     busybox-58667487b6-2jq9j                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0127 12:36:50.978683    9948 command_runner.go:130] >   kube-system                 coredns-668d6bf9bc-2qw6w                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0127 12:36:50.978683    9948 command_runner.go:130] >   kube-system                 etcd-multinode-659000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         68s
	I0127 12:36:50.978683    9948 command_runner.go:130] >   kube-system                 kindnet-z2hqq                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0127 12:36:50.978760    9948 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-659000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         68s
	I0127 12:36:50.978788    9948 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-659000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0127 12:36:50.978788    9948 command_runner.go:130] >   kube-system                 kube-proxy-s46mv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0127 12:36:50.978871    9948 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-659000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0127 12:36:50.978871    9948 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0127 12:36:50.978871    9948 command_runner.go:130] > Allocated resources:
	I0127 12:36:50.978871    9948 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0127 12:36:50.978871    9948 command_runner.go:130] >   Resource           Requests     Limits
	I0127 12:36:50.978871    9948 command_runner.go:130] >   --------           --------     ------
	I0127 12:36:50.978871    9948 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0127 12:36:50.978931    9948 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0127 12:36:50.978931    9948 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0127 12:36:50.978998    9948 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0127 12:36:50.979037    9948 command_runner.go:130] > Events:
	I0127 12:36:50.979072    9948 command_runner.go:130] >   Type     Reason                   Age                From             Message
	I0127 12:36:50.979105    9948 command_runner.go:130] >   ----     ------                   ----               ----             -------
	I0127 12:36:50.979105    9948 command_runner.go:130] >   Normal   Starting                 24m                kube-proxy       
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   Starting                 65s                kube-proxy       
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   Starting                 24m                kubelet          Starting kubelet.
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node multinode-659000 status is now: NodeHasSufficientMemory
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node multinode-659000 status is now: NodeHasNoDiskPressure
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   NodeHasSufficientPID     24m (x7 over 24m)  kubelet          Node multinode-659000 status is now: NodeHasSufficientPID
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    24m                kubelet          Node multinode-659000 status is now: NodeHasNoDiskPressure
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   NodeHasSufficientMemory  24m                kubelet          Node multinode-659000 status is now: NodeHasSufficientMemory
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   NodeHasSufficientPID     24m                kubelet          Node multinode-659000 status is now: NodeHasSufficientPID
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   Starting                 24m                kubelet          Starting kubelet.
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   RegisteredNode           24m                node-controller  Node multinode-659000 event: Registered Node multinode-659000 in Controller
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   NodeReady                24m                kubelet          Node multinode-659000 status is now: NodeReady
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   Starting                 74s                kubelet          Starting kubelet.
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   NodeHasSufficientMemory  74s (x8 over 74s)  kubelet          Node multinode-659000 status is now: NodeHasSufficientMemory
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    74s (x8 over 74s)  kubelet          Node multinode-659000 status is now: NodeHasNoDiskPressure
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   NodeHasSufficientPID     74s (x7 over 74s)  kubelet          Node multinode-659000 status is now: NodeHasSufficientPID
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Warning  Rebooted                 69s                kubelet          Node multinode-659000 has been rebooted, boot id: 058425a5-0652-4c5c-a517-2369b8cac13d
	I0127 12:36:50.979132    9948 command_runner.go:130] >   Normal   RegisteredNode           66s                node-controller  Node multinode-659000 event: Registered Node multinode-659000 in Controller
	I0127 12:36:50.979132    9948 command_runner.go:130] > Name:               multinode-659000-m02
	I0127 12:36:50.979132    9948 command_runner.go:130] > Roles:              <none>
	I0127 12:36:50.979132    9948 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0127 12:36:50.979132    9948 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0127 12:36:50.979132    9948 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0127 12:36:50.979132    9948 command_runner.go:130] >                     kubernetes.io/hostname=multinode-659000-m02
	I0127 12:36:50.979132    9948 command_runner.go:130] >                     kubernetes.io/os=linux
	I0127 12:36:50.979132    9948 command_runner.go:130] >                     minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	I0127 12:36:50.979132    9948 command_runner.go:130] >                     minikube.k8s.io/name=multinode-659000
	I0127 12:36:50.979132    9948 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0127 12:36:50.979132    9948 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_01_27T12_15_08_0700
	I0127 12:36:50.979132    9948 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0127 12:36:50.979657    9948 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0127 12:36:50.979657    9948 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0127 12:36:50.979657    9948 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0127 12:36:50.979713    9948 command_runner.go:130] > CreationTimestamp:  Mon, 27 Jan 2025 12:15:07 +0000
	I0127 12:36:50.979713    9948 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0127 12:36:50.979713    9948 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0127 12:36:50.979713    9948 command_runner.go:130] > Unschedulable:      false
	I0127 12:36:50.979713    9948 command_runner.go:130] > Lease:
	I0127 12:36:50.979713    9948 command_runner.go:130] >   HolderIdentity:  multinode-659000-m02
	I0127 12:36:50.979713    9948 command_runner.go:130] >   AcquireTime:     <unset>
	I0127 12:36:50.979713    9948 command_runner.go:130] >   RenewTime:       Mon, 27 Jan 2025 12:32:39 +0000
	I0127 12:36:50.979814    9948 command_runner.go:130] > Conditions:
	I0127 12:36:50.979814    9948 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0127 12:36:50.979814    9948 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0127 12:36:50.979814    9948 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:50.979874    9948 command_runner.go:130] >   DiskPressure     Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:50.979897    9948 command_runner.go:130] >   PIDPressure      Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Ready            Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:50.979918    9948 command_runner.go:130] > Addresses:
	I0127 12:36:50.979918    9948 command_runner.go:130] >   InternalIP:  172.29.199.129
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Hostname:    multinode-659000-m02
	I0127 12:36:50.979918    9948 command_runner.go:130] > Capacity:
	I0127 12:36:50.979918    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:50.979918    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:50.979918    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:50.979918    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:50.979918    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:50.979918    9948 command_runner.go:130] > Allocatable:
	I0127 12:36:50.979918    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:50.979918    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:50.979918    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:50.979918    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:50.979918    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:50.979918    9948 command_runner.go:130] > System Info:
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Machine ID:                 30ce15ff72904b54b07c49f3e2f28802
	I0127 12:36:50.979918    9948 command_runner.go:130] >   System UUID:                b6923799-fa1e-b54c-9340-50dd6a2378f5
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Boot ID:                    3308d183-ec79-4aeb-9d90-80d47cdbff63
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Kernel Version:             5.10.207
	I0127 12:36:50.979918    9948 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Operating System:           linux
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Architecture:               amd64
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0127 12:36:50.979918    9948 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0127 12:36:50.979918    9948 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0127 12:36:50.979918    9948 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0127 12:36:50.979918    9948 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0127 12:36:50.979918    9948 command_runner.go:130] >   default                     busybox-58667487b6-ktfxc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0127 12:36:50.979918    9948 command_runner.go:130] >   kube-system                 kindnet-n7vjl               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0127 12:36:50.979918    9948 command_runner.go:130] >   kube-system                 kube-proxy-pjhc8            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0127 12:36:50.979918    9948 command_runner.go:130] > Allocated resources:
	I0127 12:36:50.979918    9948 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Resource           Requests   Limits
	I0127 12:36:50.979918    9948 command_runner.go:130] >   --------           --------   ------
	I0127 12:36:50.979918    9948 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0127 12:36:50.979918    9948 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0127 12:36:50.979918    9948 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0127 12:36:50.979918    9948 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0127 12:36:50.979918    9948 command_runner.go:130] > Events:
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0127 12:36:50.979918    9948 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-659000-m02 status is now: NodeHasSufficientMemory
	I0127 12:36:50.979918    9948 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-659000-m02 status is now: NodeHasNoDiskPressure
	I0127 12:36:50.980451    9948 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-659000-m02 status is now: NodeHasSufficientPID
	I0127 12:36:50.980451    9948 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:50.980451    9948 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-659000-m02 event: Registered Node multinode-659000-m02 in Controller
	I0127 12:36:50.980610    9948 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-659000-m02 status is now: NodeReady
	I0127 12:36:50.980610    9948 command_runner.go:130] >   Normal  RegisteredNode           66s                node-controller  Node multinode-659000-m02 event: Registered Node multinode-659000-m02 in Controller
	I0127 12:36:50.980610    9948 command_runner.go:130] >   Normal  NodeNotReady             16s                node-controller  Node multinode-659000-m02 status is now: NodeNotReady
	I0127 12:36:50.980610    9948 command_runner.go:130] > Name:               multinode-659000-m03
	I0127 12:36:50.980610    9948 command_runner.go:130] > Roles:              <none>
	I0127 12:36:50.980610    9948 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0127 12:36:50.980610    9948 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0127 12:36:50.980610    9948 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0127 12:36:50.980610    9948 command_runner.go:130] >                     kubernetes.io/hostname=multinode-659000-m03
	I0127 12:36:50.980610    9948 command_runner.go:130] >                     kubernetes.io/os=linux
	I0127 12:36:50.980610    9948 command_runner.go:130] >                     minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	I0127 12:36:50.980610    9948 command_runner.go:130] >                     minikube.k8s.io/name=multinode-659000
	I0127 12:36:50.980610    9948 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0127 12:36:50.980610    9948 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_01_27T12_31_04_0700
	I0127 12:36:50.980610    9948 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0127 12:36:50.980610    9948 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0127 12:36:50.980610    9948 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0127 12:36:50.980610    9948 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0127 12:36:50.980610    9948 command_runner.go:130] > CreationTimestamp:  Mon, 27 Jan 2025 12:31:04 +0000
	I0127 12:36:50.980610    9948 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0127 12:36:50.980610    9948 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0127 12:36:50.980610    9948 command_runner.go:130] > Unschedulable:      false
	I0127 12:36:50.980610    9948 command_runner.go:130] > Lease:
	I0127 12:36:50.980610    9948 command_runner.go:130] >   HolderIdentity:  multinode-659000-m03
	I0127 12:36:50.980610    9948 command_runner.go:130] >   AcquireTime:     <unset>
	I0127 12:36:50.980610    9948 command_runner.go:130] >   RenewTime:       Mon, 27 Jan 2025 12:32:15 +0000
	I0127 12:36:50.980610    9948 command_runner.go:130] > Conditions:
	I0127 12:36:50.980610    9948 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0127 12:36:50.980610    9948 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0127 12:36:50.980610    9948 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:50.980610    9948 command_runner.go:130] >   DiskPressure     Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:50.980610    9948 command_runner.go:130] >   PIDPressure      Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:50.980610    9948 command_runner.go:130] >   Ready            Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:50.980610    9948 command_runner.go:130] > Addresses:
	I0127 12:36:50.980610    9948 command_runner.go:130] >   InternalIP:  172.29.206.88
	I0127 12:36:50.980610    9948 command_runner.go:130] >   Hostname:    multinode-659000-m03
	I0127 12:36:50.980610    9948 command_runner.go:130] > Capacity:
	I0127 12:36:50.980610    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:50.980610    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:50.980610    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:50.980610    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:50.981195    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:50.981195    9948 command_runner.go:130] > Allocatable:
	I0127 12:36:50.981195    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:50.981195    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:50.981255    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:50.981255    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:50.981255    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:50.981255    9948 command_runner.go:130] > System Info:
	I0127 12:36:50.981255    9948 command_runner.go:130] >   Machine ID:                 5cd7b7bdbad940e0831e949f70fdd5af
	I0127 12:36:50.981255    9948 command_runner.go:130] >   System UUID:                bab0a90b-9ed8-ba42-88b9-fc6568ad7a53
	I0127 12:36:50.981255    9948 command_runner.go:130] >   Boot ID:                    9d0d04c8-71ef-487a-a13c-e1de6463b3fe
	I0127 12:36:50.981255    9948 command_runner.go:130] >   Kernel Version:             5.10.207
	I0127 12:36:50.981347    9948 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0127 12:36:50.981347    9948 command_runner.go:130] >   Operating System:           linux
	I0127 12:36:50.981347    9948 command_runner.go:130] >   Architecture:               amd64
	I0127 12:36:50.981347    9948 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0127 12:36:50.981347    9948 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0127 12:36:50.981347    9948 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0127 12:36:50.981347    9948 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0127 12:36:50.981407    9948 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0127 12:36:50.981407    9948 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0127 12:36:50.981407    9948 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0127 12:36:50.981407    9948 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0127 12:36:50.981493    9948 command_runner.go:130] >   kube-system                 kindnet-kpfjt       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	I0127 12:36:50.981551    9948 command_runner.go:130] >   kube-system                 kube-proxy-sk5js    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	I0127 12:36:50.981573    9948 command_runner.go:130] > Allocated resources:
	I0127 12:36:50.981573    9948 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0127 12:36:50.981573    9948 command_runner.go:130] >   Resource           Requests   Limits
	I0127 12:36:50.981573    9948 command_runner.go:130] >   --------           --------   ------
	I0127 12:36:50.981573    9948 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0127 12:36:50.981573    9948 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0127 12:36:50.981573    9948 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0127 12:36:50.981573    9948 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0127 12:36:50.981648    9948 command_runner.go:130] > Events:
	I0127 12:36:50.981648    9948 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0127 12:36:50.981648    9948 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0127 12:36:50.981648    9948 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0127 12:36:50.981648    9948 command_runner.go:130] >   Normal  Starting                 5m43s                  kube-proxy       
	I0127 12:36:50.981708    9948 command_runner.go:130] >   Normal  NodeHasSufficientMemory  16m (x2 over 16m)      kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientMemory
	I0127 12:36:50.981708    9948 command_runner.go:130] >   Normal  NodeHasSufficientPID     16m (x2 over 16m)      kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientPID
	I0127 12:36:50.981708    9948 command_runner.go:130] >   Normal  NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:50.981801    9948 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    16m (x2 over 16m)      kubelet          Node multinode-659000-m03 status is now: NodeHasNoDiskPressure
	I0127 12:36:50.981801    9948 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-659000-m03 status is now: NodeReady
	I0127 12:36:50.981801    9948 command_runner.go:130] >   Normal  Starting                 5m47s                  kubelet          Starting kubelet.
	I0127 12:36:50.981801    9948 command_runner.go:130] >   Normal  CIDRAssignmentFailed     5m46s                  cidrAllocator    Node multinode-659000-m03 status is now: CIDRAssignmentFailed
	I0127 12:36:50.981801    9948 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m46s (x2 over 5m46s)  kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientMemory
	I0127 12:36:50.981801    9948 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m46s (x2 over 5m46s)  kubelet          Node multinode-659000-m03 status is now: NodeHasNoDiskPressure
	I0127 12:36:50.981801    9948 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m46s (x2 over 5m46s)  kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientPID
	I0127 12:36:50.981988    9948 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m46s                  kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:50.981988    9948 command_runner.go:130] >   Normal  RegisteredNode           5m42s                  node-controller  Node multinode-659000-m03 event: Registered Node multinode-659000-m03 in Controller
	I0127 12:36:50.981988    9948 command_runner.go:130] >   Normal  NodeReady                5m28s                  kubelet          Node multinode-659000-m03 status is now: NodeReady
	I0127 12:36:50.981988    9948 command_runner.go:130] >   Normal  NodeNotReady             3m42s                  node-controller  Node multinode-659000-m03 status is now: NodeNotReady
	I0127 12:36:50.982054    9948 command_runner.go:130] >   Normal  RegisteredNode           66s                    node-controller  Node multinode-659000-m03 event: Registered Node multinode-659000-m03 in Controller
	I0127 12:36:50.991768    9948 logs.go:123] Gathering logs for coredns [f818dd15d8b0] ...
	I0127 12:36:50.991768    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f818dd15d8b0"
	I0127 12:36:51.024368    9948 command_runner.go:130] > .:53
	I0127 12:36:51.024368    9948 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 5e2e325279dfa828a8fd1b44d83ab4703abb0247d4beadde42157147650fe687c0862eaa4caa15a5d9139c48c9a9dd5ec3cd962ba60368e8ffb4d02ae4d29aeb
	I0127 12:36:51.024426    9948 command_runner.go:130] > CoreDNS-1.11.3
	I0127 12:36:51.024426    9948 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0127 12:36:51.024426    9948 command_runner.go:130] > [INFO] 127.0.0.1:50782 - 35950 "HINFO IN 8787717511470146079.8254135695837817311. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.151481959s
	I0127 12:36:51.024426    9948 command_runner.go:130] > [INFO] 10.244.0.3:56186 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000430505s
	I0127 12:36:51.024480    9948 command_runner.go:130] > [INFO] 10.244.0.3:58756 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.126738988s
	I0127 12:36:51.024504    9948 command_runner.go:130] > [INFO] 10.244.0.3:36399 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.053330342s
	I0127 12:36:51.024504    9948 command_runner.go:130] > [INFO] 10.244.0.3:35359 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.140941591s
	I0127 12:36:51.024504    9948 command_runner.go:130] > [INFO] 10.244.1.2:41150 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000220803s
	I0127 12:36:51.024565    9948 command_runner.go:130] > [INFO] 10.244.1.2:57591 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0000709s
	I0127 12:36:51.024565    9948 command_runner.go:130] > [INFO] 10.244.1.2:45132 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000133002s
	I0127 12:36:51.024565    9948 command_runner.go:130] > [INFO] 10.244.1.2:48593 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000728s
	I0127 12:36:51.024565    9948 command_runner.go:130] > [INFO] 10.244.0.3:53274 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000261802s
	I0127 12:36:51.024641    9948 command_runner.go:130] > [INFO] 10.244.0.3:57676 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.069110701s
	I0127 12:36:51.024641    9948 command_runner.go:130] > [INFO] 10.244.0.3:59948 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000178302s
	I0127 12:36:51.024668    9948 command_runner.go:130] > [INFO] 10.244.0.3:39801 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000198802s
	I0127 12:36:51.024710    9948 command_runner.go:130] > [INFO] 10.244.0.3:45673 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023238636s
	I0127 12:36:51.024730    9948 command_runner.go:130] > [INFO] 10.244.0.3:42840 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154002s
	I0127 12:36:51.024730    9948 command_runner.go:130] > [INFO] 10.244.0.3:43505 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000181002s
	I0127 12:36:51.024730    9948 command_runner.go:130] > [INFO] 10.244.0.3:34935 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092101s
	I0127 12:36:51.024821    9948 command_runner.go:130] > [INFO] 10.244.1.2:54822 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155102s
	I0127 12:36:51.024821    9948 command_runner.go:130] > [INFO] 10.244.1.2:50877 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000188102s
	I0127 12:36:51.024846    9948 command_runner.go:130] > [INFO] 10.244.1.2:45384 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183802s
	I0127 12:36:51.024881    9948 command_runner.go:130] > [INFO] 10.244.1.2:35073 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000227202s
	I0127 12:36:51.024881    9948 command_runner.go:130] > [INFO] 10.244.1.2:50517 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000061101s
	I0127 12:36:51.024881    9948 command_runner.go:130] > [INFO] 10.244.1.2:37353 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130501s
	I0127 12:36:51.024933    9948 command_runner.go:130] > [INFO] 10.244.1.2:42117 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114301s
	I0127 12:36:51.024954    9948 command_runner.go:130] > [INFO] 10.244.1.2:46171 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060401s
	I0127 12:36:51.024979    9948 command_runner.go:130] > [INFO] 10.244.0.3:55282 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117601s
	I0127 12:36:51.024979    9948 command_runner.go:130] > [INFO] 10.244.0.3:41761 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162301s
	I0127 12:36:51.025011    9948 command_runner.go:130] > [INFO] 10.244.0.3:35358 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000218902s
	I0127 12:36:51.025011    9948 command_runner.go:130] > [INFO] 10.244.0.3:50342 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124402s
	I0127 12:36:51.025045    9948 command_runner.go:130] > [INFO] 10.244.1.2:38159 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159602s
	I0127 12:36:51.025045    9948 command_runner.go:130] > [INFO] 10.244.1.2:37043 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000171002s
	I0127 12:36:51.025078    9948 command_runner.go:130] > [INFO] 10.244.1.2:50762 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000168301s
	I0127 12:36:51.025078    9948 command_runner.go:130] > [INFO] 10.244.1.2:33014 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000603s
	I0127 12:36:51.025129    9948 command_runner.go:130] > [INFO] 10.244.0.3:34941 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134301s
	I0127 12:36:51.025129    9948 command_runner.go:130] > [INFO] 10.244.0.3:60117 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000393904s
	I0127 12:36:51.025174    9948 command_runner.go:130] > [INFO] 10.244.0.3:47506 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000214402s
	I0127 12:36:51.025174    9948 command_runner.go:130] > [INFO] 10.244.0.3:42968 - 5 "PTR IN 1.192.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000443604s
	I0127 12:36:51.025174    9948 command_runner.go:130] > [INFO] 10.244.1.2:52260 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193802s
	I0127 12:36:51.025174    9948 command_runner.go:130] > [INFO] 10.244.1.2:40492 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000310903s
	I0127 12:36:51.025174    9948 command_runner.go:130] > [INFO] 10.244.1.2:50341 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074s
	I0127 12:36:51.025174    9948 command_runner.go:130] > [INFO] 10.244.1.2:41676 - 5 "PTR IN 1.192.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000637s
	I0127 12:36:51.025174    9948 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0127 12:36:51.025174    9948 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0127 12:36:51.027582    9948 logs.go:123] Gathering logs for kube-proxy [0283b35dee3c] ...
	I0127 12:36:51.027582    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0283b35dee3c"
	I0127 12:36:51.061534    9948 command_runner.go:130] ! I0127 12:35:44.449716       1 server_linux.go:66] "Using iptables proxy"
	I0127 12:36:51.061534    9948 command_runner.go:130] ! E0127 12:35:44.569403       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0127 12:36:51.061534    9948 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0127 12:36:51.061637    9948 command_runner.go:130] ! 	add table ip kube-proxy
	I0127 12:36:51.061637    9948 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:51.061637    9948 command_runner.go:130] !  >
	I0127 12:36:51.061637    9948 command_runner.go:130] ! E0127 12:35:44.599245       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0127 12:36:51.061637    9948 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0127 12:36:51.061637    9948 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0127 12:36:51.061637    9948 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:51.061637    9948 command_runner.go:130] !  >
	I0127 12:36:51.061702    9948 command_runner.go:130] ! I0127 12:35:44.767652       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.198.106"]
	I0127 12:36:51.061736    9948 command_runner.go:130] ! E0127 12:35:44.770299       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 12:36:51.061773    9948 command_runner.go:130] ! I0127 12:35:45.038438       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 12:36:51.061773    9948 command_runner.go:130] ! I0127 12:35:45.038556       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 12:36:51.061864    9948 command_runner.go:130] ! I0127 12:35:45.038587       1 server_linux.go:170] "Using iptables Proxier"
	I0127 12:36:51.061864    9948 command_runner.go:130] ! I0127 12:35:45.043111       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 12:36:51.061864    9948 command_runner.go:130] ! I0127 12:35:45.045042       1 server.go:497] "Version info" version="v1.32.1"
	I0127 12:36:51.061966    9948 command_runner.go:130] ! I0127 12:35:45.045375       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:51.061995    9948 command_runner.go:130] ! I0127 12:35:45.053262       1 config.go:199] "Starting service config controller"
	I0127 12:36:51.061995    9948 command_runner.go:130] ! I0127 12:35:45.054808       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 12:36:51.061995    9948 command_runner.go:130] ! I0127 12:35:45.054873       1 config.go:329] "Starting node config controller"
	I0127 12:36:51.061995    9948 command_runner.go:130] ! I0127 12:35:45.054880       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 12:36:51.062237    9948 command_runner.go:130] ! I0127 12:35:45.058308       1 config.go:105] "Starting endpoint slice config controller"
	I0127 12:36:51.062290    9948 command_runner.go:130] ! I0127 12:35:45.058492       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 12:36:51.062317    9948 command_runner.go:130] ! I0127 12:35:45.155116       1 shared_informer.go:320] Caches are synced for node config
	I0127 12:36:51.062317    9948 command_runner.go:130] ! I0127 12:35:45.155116       1 shared_informer.go:320] Caches are synced for service config
	I0127 12:36:51.062317    9948 command_runner.go:130] ! I0127 12:35:45.159566       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 12:36:51.065663    9948 logs.go:123] Gathering logs for kube-controller-manager [8d4872cda28d] ...
	I0127 12:36:51.065663    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4872cda28d"
	I0127 12:36:51.100308    9948 command_runner.go:130] ! I0127 12:35:39.384985       1 serving.go:386] Generated self-signed cert in-memory
	I0127 12:36:51.101314    9948 command_runner.go:130] ! I0127 12:35:39.805936       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0127 12:36:51.101314    9948 command_runner.go:130] ! I0127 12:35:39.811206       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:51.101314    9948 command_runner.go:130] ! I0127 12:35:39.817632       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0127 12:36:51.101399    9948 command_runner.go:130] ! I0127 12:35:39.822579       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:51.101399    9948 command_runner.go:130] ! I0127 12:35:39.822772       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 12:36:51.101399    9948 command_runner.go:130] ! I0127 12:35:39.823033       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:51.101399    9948 command_runner.go:130] ! I0127 12:35:43.406116       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0127 12:36:51.101399    9948 command_runner.go:130] ! I0127 12:35:43.407249       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0127 12:36:51.101462    9948 command_runner.go:130] ! I0127 12:35:43.417237       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0127 12:36:51.101487    9948 command_runner.go:130] ! I0127 12:35:43.417292       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0127 12:36:51.101487    9948 command_runner.go:130] ! I0127 12:35:43.417300       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0127 12:36:51.101487    9948 command_runner.go:130] ! I0127 12:35:43.417307       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0127 12:36:51.101487    9948 command_runner.go:130] ! I0127 12:35:43.417506       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0127 12:36:51.101487    9948 command_runner.go:130] ! I0127 12:35:43.417534       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0127 12:36:51.101487    9948 command_runner.go:130] ! I0127 12:35:43.417553       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0127 12:36:51.101487    9948 command_runner.go:130] ! I0127 12:35:43.431621       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0127 12:36:51.101593    9948 command_runner.go:130] ! I0127 12:35:43.431964       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0127 12:36:51.101593    9948 command_runner.go:130] ! I0127 12:35:43.431989       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0127 12:36:51.101664    9948 command_runner.go:130] ! I0127 12:35:43.432010       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0127 12:36:51.101664    9948 command_runner.go:130] ! I0127 12:35:43.442961       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0127 12:36:51.101711    9948 command_runner.go:130] ! I0127 12:35:43.447308       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0127 12:36:51.101737    9948 command_runner.go:130] ! I0127 12:35:43.447396       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0127 12:36:51.101767    9948 command_runner.go:130] ! I0127 12:35:43.449412       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.449608       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.466583       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.467490       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.467508       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.491988       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.493672       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.493698       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.498557       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.503953       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.503976       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.505729       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.505861       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.505872       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.509718       1 shared_informer.go:320] Caches are synced for tokens
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.510192       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.510208       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.510698       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.510714       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.512896       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.513433       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.513448       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.516433       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.516659       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.516671       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.524334       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.524358       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0127 12:36:51.101793    9948 command_runner.go:130] ! I0127 12:35:43.524545       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0127 12:36:51.102333    9948 command_runner.go:130] ! I0127 12:35:43.524557       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0127 12:36:51.102333    9948 command_runner.go:130] ! I0127 12:35:43.534871       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0127 12:36:51.102333    9948 command_runner.go:130] ! I0127 12:35:43.535028       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0127 12:36:51.102333    9948 command_runner.go:130] ! I0127 12:35:43.535038       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0127 12:36:51.102333    9948 command_runner.go:130] ! I0127 12:35:43.557745       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0127 12:36:51.102333    9948 command_runner.go:130] ! I0127 12:35:43.557975       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0127 12:36:51.102333    9948 command_runner.go:130] ! I0127 12:35:43.612615       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0127 12:36:51.102514    9948 command_runner.go:130] ! I0127 12:35:43.612890       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0127 12:36:51.102539    9948 command_runner.go:130] ! I0127 12:35:43.612906       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0127 12:36:51.102539    9948 command_runner.go:130] ! I0127 12:35:43.616333       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0127 12:36:51.102566    9948 command_runner.go:130] ! I0127 12:35:43.627087       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.627107       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.692864       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.692892       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.693095       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.700796       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.703832       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.703867       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.713912       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.714114       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.714094       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.714712       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.714721       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.721904       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.722372       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.723076       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.739709       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.739886       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.739897       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.748074       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.748419       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.748432       1 shared_informer.go:313] Waiting for caches to sync for job
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.774085       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.774108       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.774196       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0127 12:36:51.102601    9948 command_runner.go:130] ! I0127 12:35:43.814844       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0127 12:36:51.103170    9948 command_runner.go:130] ! I0127 12:35:43.815383       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0127 12:36:51.103170    9948 command_runner.go:130] ! I0127 12:35:43.815410       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0127 12:36:51.103230    9948 command_runner.go:130] ! W0127 12:35:43.815432       1 shared_informer.go:597] resyncPeriod 17h46m45.188948257s is smaller than resyncCheckPeriod 20h1m58.14772951s and the informer has already started. Changing it to 20h1m58.14772951s
	I0127 12:36:51.103230    9948 command_runner.go:130] ! I0127 12:35:43.815487       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0127 12:36:51.103230    9948 command_runner.go:130] ! I0127 12:35:43.815503       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0127 12:36:51.103322    9948 command_runner.go:130] ! I0127 12:35:43.816077       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0127 12:36:51.103348    9948 command_runner.go:130] ! I0127 12:35:43.816613       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0127 12:36:51.103348    9948 command_runner.go:130] ! I0127 12:35:43.817053       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0127 12:36:51.103348    9948 command_runner.go:130] ! I0127 12:35:43.817252       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0127 12:36:51.103414    9948 command_runner.go:130] ! I0127 12:35:43.817373       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0127 12:36:51.103414    9948 command_runner.go:130] ! I0127 12:35:43.817397       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0127 12:36:51.103414    9948 command_runner.go:130] ! W0127 12:35:43.818105       1 shared_informer.go:597] resyncPeriod 12h27m56.377400464s is smaller than resyncCheckPeriod 20h1m58.14772951s and the informer has already started. Changing it to 20h1m58.14772951s
	I0127 12:36:51.103475    9948 command_runner.go:130] ! I0127 12:35:43.818223       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0127 12:36:51.103475    9948 command_runner.go:130] ! I0127 12:35:43.818270       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0127 12:36:51.103475    9948 command_runner.go:130] ! I0127 12:35:43.818295       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0127 12:36:51.103555    9948 command_runner.go:130] ! I0127 12:35:43.818319       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.818336       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.818363       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.818376       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.818392       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.818410       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.818442       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.818764       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.818778       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.819843       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.841955       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.842559       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.842587       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.842995       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.852026       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.852211       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.852253       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.922876       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.923019       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.923033       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.962858       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.962895       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.963021       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:43.963037       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:44.014798       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:44.016438       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:44.016458       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:44.066881       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:44.067018       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0127 12:36:51.103582    9948 command_runner.go:130] ! I0127 12:35:44.067064       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0127 12:36:51.103582    9948 command_runner.go:130] ! W0127 12:35:44.227808       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0127 12:36:51.104119    9948 command_runner.go:130] ! I0127 12:35:44.236233       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0127 12:36:51.104162    9948 command_runner.go:130] ! I0127 12:35:44.236429       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0127 12:36:51.104162    9948 command_runner.go:130] ! I0127 12:35:44.236541       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0127 12:36:51.104162    9948 command_runner.go:130] ! I0127 12:35:44.236556       1 shared_informer.go:313] Waiting for caches to sync for node
	I0127 12:36:51.104162    9948 command_runner.go:130] ! I0127 12:35:44.261051       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0127 12:36:51.104162    9948 command_runner.go:130] ! I0127 12:35:44.261341       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0127 12:36:51.104162    9948 command_runner.go:130] ! I0127 12:35:44.261374       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0127 12:36:51.104162    9948 command_runner.go:130] ! I0127 12:35:44.314220       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0127 12:36:51.104311    9948 command_runner.go:130] ! I0127 12:35:44.314319       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0127 12:36:51.104311    9948 command_runner.go:130] ! I0127 12:35:44.314352       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0127 12:36:51.104347    9948 command_runner.go:130] ! I0127 12:35:44.364392       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0127 12:36:51.104347    9948 command_runner.go:130] ! I0127 12:35:44.364625       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0127 12:36:51.104347    9948 command_runner.go:130] ! I0127 12:35:44.365833       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0127 12:36:51.104347    9948 command_runner.go:130] ! I0127 12:35:44.365937       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0127 12:36:51.104347    9948 command_runner.go:130] ! I0127 12:35:44.365975       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:51.104430    9948 command_runner.go:130] ! I0127 12:35:44.365977       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:51.104430    9948 command_runner.go:130] ! I0127 12:35:44.367697       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0127 12:36:51.104465    9948 command_runner.go:130] ! I0127 12:35:44.368067       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0127 12:36:51.104465    9948 command_runner.go:130] ! I0127 12:35:44.368427       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:51.104465    9948 command_runner.go:130] ! I0127 12:35:44.369763       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0127 12:36:51.104556    9948 command_runner.go:130] ! I0127 12:35:44.370290       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0127 12:36:51.104556    9948 command_runner.go:130] ! I0127 12:35:44.370408       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0127 12:36:51.104556    9948 command_runner.go:130] ! I0127 12:35:44.370568       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:51.104556    9948 command_runner.go:130] ! I0127 12:35:44.412258       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0127 12:36:51.104626    9948 command_runner.go:130] ! I0127 12:35:44.412274       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0127 12:36:51.104626    9948 command_runner.go:130] ! I0127 12:35:44.412282       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0127 12:36:51.104701    9948 command_runner.go:130] ! I0127 12:35:44.412297       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0127 12:36:51.104701    9948 command_runner.go:130] ! I0127 12:35:44.412368       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0127 12:36:51.104701    9948 command_runner.go:130] ! I0127 12:35:44.412379       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0127 12:36:51.104701    9948 command_runner.go:130] ! I0127 12:35:44.517568       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0127 12:36:51.104701    9948 command_runner.go:130] ! I0127 12:35:44.517771       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0127 12:36:51.104701    9948 command_runner.go:130] ! I0127 12:35:44.518074       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0127 12:36:51.104701    9948 command_runner.go:130] ! I0127 12:35:44.518288       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0127 12:36:51.104801    9948 command_runner.go:130] ! I0127 12:35:44.564449       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0127 12:36:51.104801    9948 command_runner.go:130] ! I0127 12:35:44.564546       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0127 12:36:51.104801    9948 command_runner.go:130] ! I0127 12:35:44.564657       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0127 12:36:51.104801    9948 command_runner.go:130] ! I0127 12:35:44.591265       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0127 12:36:51.104801    9948 command_runner.go:130] ! I0127 12:35:44.663628       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.727283       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.739370       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000\" does not exist"
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.739797       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m02\" does not exist"
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.740184       1 shared_informer.go:320] Caches are synced for crt configmap
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.740835       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m03\" does not exist"
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.747985       1 shared_informer.go:320] Caches are synced for GC
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.748593       1 shared_informer.go:320] Caches are synced for job
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.765439       1 shared_informer.go:320] Caches are synced for cronjob
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.765669       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.765982       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.766264       1 shared_informer.go:320] Caches are synced for expand
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.766617       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.767305       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.767462       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.768217       1 shared_informer.go:320] Caches are synced for stateful set
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.766681       1 shared_informer.go:320] Caches are synced for daemon sets
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.774887       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.775167       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.775269       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.775418       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.778028       1 shared_informer.go:320] Caches are synced for HPA
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.793610       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.793916       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.798773       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.805302       1 shared_informer.go:320] Caches are synced for PVC protection
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.805404       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.806234       1 shared_informer.go:320] Caches are synced for PV protection
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.811621       1 shared_informer.go:320] Caches are synced for ephemeral
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.813099       1 shared_informer.go:320] Caches are synced for TTL
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.813420       1 shared_informer.go:320] Caches are synced for namespace
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.813655       1 shared_informer.go:320] Caches are synced for deployment
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.815238       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.819201       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.819433       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.820006       1 shared_informer.go:320] Caches are synced for disruption
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.821695       1 shared_informer.go:320] Caches are synced for taint
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.821905       1 shared_informer.go:320] Caches are synced for endpoint
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.824479       1 shared_informer.go:320] Caches are synced for persistent volume
	I0127 12:36:51.104876    9948 command_runner.go:130] ! I0127 12:35:44.824852       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0127 12:36:51.105456    9948 command_runner.go:130] ! I0127 12:35:44.825228       1 shared_informer.go:320] Caches are synced for attach detach
	I0127 12:36:51.105456    9948 command_runner.go:130] ! I0127 12:35:44.825784       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0127 12:36:51.105456    9948 command_runner.go:130] ! I0127 12:35:44.836209       1 shared_informer.go:320] Caches are synced for service account
	I0127 12:36:51.105456    9948 command_runner.go:130] ! I0127 12:35:44.836651       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0127 12:36:51.105531    9948 command_runner.go:130] ! I0127 12:35:44.836969       1 shared_informer.go:320] Caches are synced for node
	I0127 12:36:51.105531    9948 command_runner.go:130] ! I0127 12:35:44.838015       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0127 12:36:51.105531    9948 command_runner.go:130] ! I0127 12:35:44.838049       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0127 12:36:51.105531    9948 command_runner.go:130] ! I0127 12:35:44.838058       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0127 12:36:51.105531    9948 command_runner.go:130] ! I0127 12:35:44.838065       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0127 12:36:51.105619    9948 command_runner.go:130] ! I0127 12:35:44.838200       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.105643    9948 command_runner.go:130] ! I0127 12:35:44.838217       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.105675    9948 command_runner.go:130] ! I0127 12:35:44.838227       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.105675    9948 command_runner.go:130] ! I0127 12:35:44.844908       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:36:51.105711    9948 command_runner.go:130] ! I0127 12:35:44.845551       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0127 12:36:51.105711    9948 command_runner.go:130] ! I0127 12:35:44.845777       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0127 12:36:51.105747    9948 command_runner.go:130] ! I0127 12:35:44.898551       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.105747    9948 command_runner.go:130] ! I0127 12:35:44.899476       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.105747    9948 command_runner.go:130] ! I0127 12:35:44.900201       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000"
	I0127 12:36:51.105805    9948 command_runner.go:130] ! I0127 12:35:44.900496       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m02"
	I0127 12:36:51.105805    9948 command_runner.go:130] ! I0127 12:35:44.900687       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m03"
	I0127 12:36:51.105877    9948 command_runner.go:130] ! I0127 12:35:44.901405       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0127 12:36:51.105877    9948 command_runner.go:130] ! I0127 12:35:44.984858       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.105920    9948 command_runner.go:130] ! I0127 12:35:45.000632       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="180.930208ms"
	I0127 12:36:51.105920    9948 command_runner.go:130] ! I0127 12:35:45.003909       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="39.2µs"
	I0127 12:36:51.105920    9948 command_runner.go:130] ! I0127 12:35:45.016382       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="195.414857ms"
	I0127 12:36:51.105986    9948 command_runner.go:130] ! I0127 12:35:45.016698       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="108.2µs"
	I0127 12:36:51.105986    9948 command_runner.go:130] ! I0127 12:35:54.975850       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.105986    9948 command_runner.go:130] ! I0127 12:36:32.834093       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:51.106046    9948 command_runner.go:130] ! I0127 12:36:32.834425       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.106046    9948 command_runner.go:130] ! I0127 12:36:32.855708       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.106093    9948 command_runner.go:130] ! I0127 12:36:34.928482       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.106118    9948 command_runner.go:130] ! I0127 12:36:34.940809       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.106149    9948 command_runner.go:130] ! I0127 12:36:34.955742       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.106176    9948 command_runner.go:130] ! I0127 12:36:35.025877       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="15.32946ms"
	I0127 12:36:51.106176    9948 command_runner.go:130] ! I0127 12:36:35.026020       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="30.3µs"
	I0127 12:36:51.106176    9948 command_runner.go:130] ! I0127 12:36:40.041357       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.106176    9948 command_runner.go:130] ! I0127 12:36:47.580904       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="50.8µs"
	I0127 12:36:51.106176    9948 command_runner.go:130] ! I0127 12:36:48.616631       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="19.328909ms"
	I0127 12:36:51.106176    9948 command_runner.go:130] ! I0127 12:36:48.617909       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="35.8µs"
	I0127 12:36:51.106176    9948 command_runner.go:130] ! I0127 12:36:48.650691       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="23.414753ms"
	I0127 12:36:51.106176    9948 command_runner.go:130] ! I0127 12:36:48.651163       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="28.701µs"
	I0127 12:36:51.123876    9948 logs.go:123] Gathering logs for kube-controller-manager [e07a66f8f619] ...
	I0127 12:36:51.123876    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e07a66f8f619"
	I0127 12:36:51.168168    9948 command_runner.go:130] ! I0127 12:11:53.668834       1 serving.go:386] Generated self-signed cert in-memory
	I0127 12:36:51.168267    9948 command_runner.go:130] ! I0127 12:11:53.986868       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0127 12:36:51.168267    9948 command_runner.go:130] ! I0127 12:11:53.987309       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:51.168267    9948 command_runner.go:130] ! I0127 12:11:53.989401       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0127 12:36:51.168267    9948 command_runner.go:130] ! I0127 12:11:53.990012       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:51.168267    9948 command_runner.go:130] ! I0127 12:11:53.990187       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 12:36:51.168267    9948 command_runner.go:130] ! I0127 12:11:53.990322       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:51.168267    9948 command_runner.go:130] ! I0127 12:11:58.581695       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0127 12:36:51.168267    9948 command_runner.go:130] ! I0127 12:11:58.581741       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0127 12:36:51.168267    9948 command_runner.go:130] ! I0127 12:11:58.615284       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0127 12:36:51.168267    9948 command_runner.go:130] ! I0127 12:11:58.615497       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0127 12:36:51.168436    9948 command_runner.go:130] ! I0127 12:11:58.615545       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0127 12:36:51.168436    9948 command_runner.go:130] ! I0127 12:11:58.626456       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0127 12:36:51.168436    9948 command_runner.go:130] ! I0127 12:11:58.626896       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0127 12:36:51.168436    9948 command_runner.go:130] ! I0127 12:11:58.626952       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0127 12:36:51.168515    9948 command_runner.go:130] ! I0127 12:11:58.636784       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0127 12:36:51.168515    9948 command_runner.go:130] ! I0127 12:11:58.636866       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0127 12:36:51.168578    9948 command_runner.go:130] ! I0127 12:11:58.637077       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0127 12:36:51.168633    9948 command_runner.go:130] ! I0127 12:11:58.637108       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0127 12:36:51.168633    9948 command_runner.go:130] ! I0127 12:11:58.649619       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0127 12:36:51.168667    9948 command_runner.go:130] ! I0127 12:11:58.649750       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0127 12:36:51.168690    9948 command_runner.go:130] ! I0127 12:11:58.649765       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0127 12:36:51.168690    9948 command_runner.go:130] ! I0127 12:11:58.650223       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0127 12:36:51.168690    9948 command_runner.go:130] ! I0127 12:11:58.650457       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0127 12:36:51.168747    9948 command_runner.go:130] ! I0127 12:11:58.682646       1 shared_informer.go:320] Caches are synced for tokens
	I0127 12:36:51.168747    9948 command_runner.go:130] ! I0127 12:11:58.684061       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0127 12:36:51.168747    9948 command_runner.go:130] ! I0127 12:11:58.684098       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0127 12:36:51.168812    9948 command_runner.go:130] ! I0127 12:11:58.698781       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.699001       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.699050       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.699060       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.720187       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.720450       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.725202       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.736652       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.737667       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.738017       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.758863       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.759137       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.759589       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.759751       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.778737       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.779301       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.794263       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.805098       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.805155       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.805917       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.889766       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.889864       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:58.889880       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:59.169736       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:59.169792       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:59.169804       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:59.292507       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:59.292665       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:59.292680       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:59.451231       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:59.451328       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:59.451387       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:59.451649       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:59.594702       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0127 12:36:51.168841    9948 command_runner.go:130] ! I0127 12:11:59.594829       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0127 12:36:51.169378    9948 command_runner.go:130] ! I0127 12:11:59.595498       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0127 12:36:51.169378    9948 command_runner.go:130] ! I0127 12:11:59.595889       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0127 12:36:51.169378    9948 command_runner.go:130] ! I0127 12:11:59.744969       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0127 12:36:51.169378    9948 command_runner.go:130] ! I0127 12:11:59.745617       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0127 12:36:51.169378    9948 command_runner.go:130] ! I0127 12:11:59.745871       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0127 12:36:51.169473    9948 command_runner.go:130] ! I0127 12:11:59.892444       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0127 12:36:51.169473    9948 command_runner.go:130] ! I0127 12:11:59.892907       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0127 12:36:51.169473    9948 command_runner.go:130] ! I0127 12:11:59.893093       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0127 12:36:51.169473    9948 command_runner.go:130] ! I0127 12:12:00.136328       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0127 12:36:51.169473    9948 command_runner.go:130] ! I0127 12:12:00.136634       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0127 12:36:51.169547    9948 command_runner.go:130] ! I0127 12:12:00.136654       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0127 12:36:51.169547    9948 command_runner.go:130] ! I0127 12:12:00.136681       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0127 12:36:51.169547    9948 command_runner.go:130] ! I0127 12:12:00.425858       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0127 12:36:51.169547    9948 command_runner.go:130] ! I0127 12:12:00.426027       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0127 12:36:51.169613    9948 command_runner.go:130] ! I0127 12:12:00.426047       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0127 12:36:51.169642    9948 command_runner.go:130] ! I0127 12:12:00.426160       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0127 12:36:51.169642    9948 command_runner.go:130] ! I0127 12:12:00.426327       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0127 12:36:51.169642    9948 command_runner.go:130] ! I0127 12:12:00.426356       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0127 12:36:51.169642    9948 command_runner.go:130] ! I0127 12:12:00.685414       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0127 12:36:51.169708    9948 command_runner.go:130] ! I0127 12:12:00.685471       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0127 12:36:51.169708    9948 command_runner.go:130] ! I0127 12:12:00.685482       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0127 12:36:51.169708    9948 command_runner.go:130] ! I0127 12:12:00.841490       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0127 12:36:51.169708    9948 command_runner.go:130] ! I0127 12:12:00.841888       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0127 12:36:51.169708    9948 command_runner.go:130] ! I0127 12:12:00.841953       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0127 12:36:51.169708    9948 command_runner.go:130] ! I0127 12:12:00.888027       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0127 12:36:51.169815    9948 command_runner.go:130] ! I0127 12:12:00.888135       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0127 12:36:51.169815    9948 command_runner.go:130] ! I0127 12:12:00.888174       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:51.169815    9948 command_runner.go:130] ! I0127 12:12:00.889767       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0127 12:36:51.169883    9948 command_runner.go:130] ! I0127 12:12:00.889893       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0127 12:36:51.169883    9948 command_runner.go:130] ! I0127 12:12:00.889957       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:00.890020       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:00.890047       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:00.890072       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:00.890079       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:00.890101       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:00.890256       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:00.890391       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.042988       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.043513       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.043602       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.043761       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0127 12:36:51.169935    9948 command_runner.go:130] ! W0127 12:12:01.189051       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.192613       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.192663       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.193062       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.193147       1 shared_informer.go:313] Waiting for caches to sync for node
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.493812       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.493885       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.493919       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.494208       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.494371       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.494391       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.494413       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.494456       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.494473       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0127 12:36:51.169935    9948 command_runner.go:130] ! I0127 12:12:01.494487       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0127 12:36:51.170470    9948 command_runner.go:130] ! I0127 12:12:01.494531       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0127 12:36:51.170470    9948 command_runner.go:130] ! I0127 12:12:01.494547       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0127 12:36:51.170470    9948 command_runner.go:130] ! I0127 12:12:01.494617       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0127 12:36:51.170470    9948 command_runner.go:130] ! I0127 12:12:01.494687       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0127 12:36:51.170576    9948 command_runner.go:130] ! I0127 12:12:01.494717       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0127 12:36:51.170576    9948 command_runner.go:130] ! I0127 12:12:01.494749       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0127 12:36:51.170647    9948 command_runner.go:130] ! I0127 12:12:01.494763       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0127 12:36:51.170647    9948 command_runner.go:130] ! I0127 12:12:01.494781       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0127 12:36:51.170647    9948 command_runner.go:130] ! I0127 12:12:01.494815       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0127 12:36:51.170734    9948 command_runner.go:130] ! I0127 12:12:01.494890       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0127 12:36:51.170734    9948 command_runner.go:130] ! I0127 12:12:01.495196       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0127 12:36:51.170734    9948 command_runner.go:130] ! I0127 12:12:01.495268       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0127 12:36:51.170804    9948 command_runner.go:130] ! I0127 12:12:01.495404       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0127 12:36:51.170804    9948 command_runner.go:130] ! I0127 12:12:01.495519       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0127 12:36:51.170804    9948 command_runner.go:130] ! I0127 12:12:01.640900       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0127 12:36:51.170804    9948 command_runner.go:130] ! I0127 12:12:01.641423       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0127 12:36:51.170905    9948 command_runner.go:130] ! I0127 12:12:01.641492       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:01.789671       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:01.790209       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:01.790224       1 shared_informer.go:313] Waiting for caches to sync for job
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:01.939873       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:01.940295       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:01.940375       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.099155       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.099654       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.099741       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.240427       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.240688       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.240725       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.390343       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.390438       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.390450       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.539643       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.539766       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.539778       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.691835       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.691969       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.739108       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.739143       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.739157       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.739400       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.739775       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.740069       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.890126       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0127 12:36:51.170934    9948 command_runner.go:130] ! I0127 12:12:02.890235       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0127 12:36:51.171497    9948 command_runner.go:130] ! I0127 12:12:02.890247       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0127 12:36:51.171497    9948 command_runner.go:130] ! I0127 12:12:03.040125       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0127 12:36:51.171497    9948 command_runner.go:130] ! I0127 12:12:03.040770       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0127 12:36:51.171497    9948 command_runner.go:130] ! I0127 12:12:03.040983       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0127 12:36:51.171497    9948 command_runner.go:130] ! I0127 12:12:03.063768       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0127 12:36:51.171594    9948 command_runner.go:130] ! I0127 12:12:03.092877       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0127 12:36:51.171594    9948 command_runner.go:130] ! I0127 12:12:03.093448       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0127 12:36:51.171594    9948 command_runner.go:130] ! I0127 12:12:03.110720       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000\" does not exist"
	I0127 12:36:51.171655    9948 command_runner.go:130] ! I0127 12:12:03.126986       1 shared_informer.go:320] Caches are synced for endpoint
	I0127 12:36:51.171679    9948 command_runner.go:130] ! I0127 12:12:03.127087       1 shared_informer.go:320] Caches are synced for taint
	I0127 12:36:51.171679    9948 command_runner.go:130] ! I0127 12:12:03.127203       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.127313       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000"
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.127524       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.137503       1 shared_informer.go:320] Caches are synced for service account
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.137554       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.138208       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.138217       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.138352       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.141127       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.141405       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.141415       1 shared_informer.go:320] Caches are synced for TTL
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.141424       1 shared_informer.go:320] Caches are synced for ephemeral
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.141607       1 shared_informer.go:320] Caches are synced for daemon sets
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.141617       1 shared_informer.go:320] Caches are synced for stateful set
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.142442       1 shared_informer.go:320] Caches are synced for cronjob
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.146511       1 shared_informer.go:320] Caches are synced for persistent volume
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.150765       1 shared_informer.go:320] Caches are synced for expand
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.152122       1 shared_informer.go:320] Caches are synced for PVC protection
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.160180       1 shared_informer.go:320] Caches are synced for GC
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.164570       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.170520       1 shared_informer.go:320] Caches are synced for namespace
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.185040       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.186131       1 shared_informer.go:320] Caches are synced for HPA
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.188683       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.191196       1 shared_informer.go:320] Caches are synced for crt configmap
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.192089       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.192497       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.192682       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.192862       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.193013       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.193030       1 shared_informer.go:320] Caches are synced for job
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.193151       1 shared_informer.go:320] Caches are synced for deployment
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.193982       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.194157       1 shared_informer.go:320] Caches are synced for node
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.194244       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.194281       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.194310       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.194318       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.194846       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0127 12:36:51.171708    9948 command_runner.go:130] ! I0127 12:12:03.196614       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:36:51.172238    9948 command_runner.go:130] ! I0127 12:12:03.197111       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0127 12:36:51.172238    9948 command_runner.go:130] ! I0127 12:12:03.197095       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0127 12:36:51.172278    9948 command_runner.go:130] ! I0127 12:12:03.199168       1 shared_informer.go:320] Caches are synced for disruption
	I0127 12:36:51.172278    9948 command_runner.go:130] ! I0127 12:12:03.200153       1 shared_informer.go:320] Caches are synced for attach detach
	I0127 12:36:51.172328    9948 command_runner.go:130] ! I0127 12:12:03.207229       1 shared_informer.go:320] Caches are synced for PV protection
	I0127 12:36:51.172328    9948 command_runner.go:130] ! I0127 12:12:03.214016       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-659000" podCIDRs=["10.244.0.0/24"]
	I0127 12:36:51.172362    9948 command_runner.go:130] ! I0127 12:12:03.214057       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.172362    9948 command_runner.go:130] ! I0127 12:12:03.214083       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.172390    9948 command_runner.go:130] ! I0127 12:12:03.216325       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0127 12:36:51.172424    9948 command_runner.go:130] ! I0127 12:12:03.840748       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.172424    9948 command_runner.go:130] ! I0127 12:12:04.356274       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="345.711056ms"
	I0127 12:36:51.172453    9948 command_runner.go:130] ! I0127 12:12:04.454747       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="97.841105ms"
	I0127 12:36:51.172479    9948 command_runner.go:130] ! I0127 12:12:04.534437       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="79.56576ms"
	I0127 12:36:51.172498    9948 command_runner.go:130] ! I0127 12:12:04.576528       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="41.959673ms"
	I0127 12:36:51.172554    9948 command_runner.go:130] ! I0127 12:12:04.576771       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="53.3µs"
	I0127 12:36:51.172586    9948 command_runner.go:130] ! I0127 12:12:26.045035       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.172625    9948 command_runner.go:130] ! I0127 12:12:26.074083       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.172625    9948 command_runner.go:130] ! I0127 12:12:26.085407       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="109.3µs"
	I0127 12:36:51.172625    9948 command_runner.go:130] ! I0127 12:12:26.129584       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="119.3µs"
	I0127 12:36:51.172681    9948 command_runner.go:130] ! I0127 12:12:27.964629       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="49.302µs"
	I0127 12:36:51.172703    9948 command_runner.go:130] ! I0127 12:12:28.020606       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="31.923176ms"
	I0127 12:36:51.172703    9948 command_runner.go:130] ! I0127 12:12:28.020971       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="110.703µs"
	I0127 12:36:51.172703    9948 command_runner.go:130] ! I0127 12:12:28.132341       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0127 12:36:51.172703    9948 command_runner.go:130] ! I0127 12:12:29.790464       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.172815    9948 command_runner.go:130] ! I0127 12:15:07.611410       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m02\" does not exist"
	I0127 12:36:51.172815    9948 command_runner.go:130] ! I0127 12:15:07.630009       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-659000-m02" podCIDRs=["10.244.1.0/24"]
	I0127 12:36:51.172873    9948 command_runner.go:130] ! I0127 12:15:07.631297       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.172873    9948 command_runner.go:130] ! I0127 12:15:07.631526       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.172873    9948 command_runner.go:130] ! I0127 12:15:07.655401       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.172873    9948 command_runner.go:130] ! I0127 12:15:07.883346       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.172954    9948 command_runner.go:130] ! I0127 12:15:08.169505       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m02"
	I0127 12:36:51.172954    9948 command_runner.go:130] ! I0127 12:15:08.255644       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.172954    9948 command_runner.go:130] ! I0127 12:15:08.418223       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.173007    9948 command_runner.go:130] ! I0127 12:15:17.811768       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.173007    9948 command_runner.go:130] ! I0127 12:15:36.752543       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:51.173007    9948 command_runner.go:130] ! I0127 12:15:36.753915       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.173090    9948 command_runner.go:130] ! I0127 12:15:36.769807       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.173090    9948 command_runner.go:130] ! I0127 12:15:38.199464       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.173090    9948 command_runner.go:130] ! I0127 12:15:38.449749       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.173165    9948 command_runner.go:130] ! I0127 12:16:02.550786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="103.313802ms"
	I0127 12:36:51.173238    9948 command_runner.go:130] ! I0127 12:16:02.585867       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="34.67067ms"
	I0127 12:36:51.173238    9948 command_runner.go:130] ! I0127 12:16:02.586257       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="347.903µs"
	I0127 12:36:51.173238    9948 command_runner.go:130] ! I0127 12:16:02.588870       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="48.6µs"
	I0127 12:36:51.173238    9948 command_runner.go:130] ! I0127 12:16:05.434486       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="13.589639ms"
	I0127 12:36:51.173338    9948 command_runner.go:130] ! I0127 12:16:05.435765       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="54.401µs"
	I0127 12:36:51.173338    9948 command_runner.go:130] ! I0127 12:16:05.890170       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="9.003392ms"
	I0127 12:36:51.173338    9948 command_runner.go:130] ! I0127 12:16:05.890477       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="36.901µs"
	I0127 12:36:51.173338    9948 command_runner.go:130] ! I0127 12:16:09.305780       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.173409    9948 command_runner.go:130] ! I0127 12:16:33.434322       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.173433    9948 command_runner.go:130] ! I0127 12:19:26.820887       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.173433    9948 command_runner.go:130] ! I0127 12:19:54.916460       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:51.173483    9948 command_runner.go:130] ! I0127 12:19:54.917420       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m03\" does not exist"
	I0127 12:36:51.173649    9948 command_runner.go:130] ! I0127 12:19:54.965530       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-659000-m03" podCIDRs=["10.244.2.0/24"]
	I0127 12:36:51.173649    9948 command_runner.go:130] ! I0127 12:19:54.966061       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.173649    9948 command_runner.go:130] ! I0127 12:19:54.966297       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.173730    9948 command_runner.go:130] ! I0127 12:19:55.802981       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.173730    9948 command_runner.go:130] ! I0127 12:19:56.378698       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.173730    9948 command_runner.go:130] ! I0127 12:19:58.252320       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m03"
	I0127 12:36:51.173812    9948 command_runner.go:130] ! I0127 12:19:58.280410       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.173812    9948 command_runner.go:130] ! I0127 12:20:05.560777       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.173812    9948 command_runner.go:130] ! I0127 12:20:25.959831       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.173918    9948 command_runner.go:130] ! I0127 12:20:28.750598       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.173943    9948 command_runner.go:130] ! I0127 12:20:28.751325       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:51.173943    9948 command_runner.go:130] ! I0127 12:20:28.769163       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.173943    9948 command_runner.go:130] ! I0127 12:20:33.279397       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174005    9948 command_runner.go:130] ! I0127 12:23:26.795899       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.174060    9948 command_runner.go:130] ! I0127 12:24:32.956118       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.174060    9948 command_runner.go:130] ! I0127 12:25:42.001288       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174060    9948 command_runner.go:130] ! I0127 12:28:32.628178       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:51.174060    9948 command_runner.go:130] ! I0127 12:28:38.397672       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:51.174135    9948 command_runner.go:130] ! I0127 12:28:38.399092       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174135    9948 command_runner.go:130] ! I0127 12:28:38.428451       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174135    9948 command_runner.go:130] ! I0127 12:28:43.510900       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174208    9948 command_runner.go:130] ! I0127 12:29:38.000555       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:51.174231    9948 command_runner.go:130] ! I0127 12:30:52.866288       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:30:52.895359       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:30:58.140304       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:04.208510       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:04.209007       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m03\" does not exist"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:04.238560       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-659000-m03" podCIDRs=["10.244.3.0/24"]
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:04.238634       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! E0127 12:31:04.255963       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-659000-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-659000-m03" podCIDRs=["10.244.4.0/24"]
	I0127 12:36:51.174257    9948 command_runner.go:130] ! E0127 12:31:04.256068       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-659000-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! E0127 12:31:04.256109       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'multinode-659000-m03': failed to patch node CIDR: Node \"multinode-659000-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:04.256134       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:04.261242       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:04.513319       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:05.081710       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:08.523576       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:14.394811       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:22.407069       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:22.407472       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:22.419743       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:31:23.498434       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:33:08.544063       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:33:08.544656       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:33:08.574301       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.174257    9948 command_runner.go:130] ! I0127 12:33:13.661256       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:51.199393    9948 logs.go:123] Gathering logs for kube-apiserver [ea993630a310] ...
	I0127 12:36:51.199393    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea993630a310"
	I0127 12:36:51.228398    9948 command_runner.go:130] ! W0127 12:35:38.851605       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0127 12:36:51.229061    9948 command_runner.go:130] ! I0127 12:35:38.853397       1 options.go:238] external host was not specified, using 172.29.198.106
	I0127 12:36:51.229061    9948 command_runner.go:130] ! I0127 12:35:38.858160       1 server.go:143] Version: v1.32.1
	I0127 12:36:51.229061    9948 command_runner.go:130] ! I0127 12:35:38.858493       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:51.229061    9948 command_runner.go:130] ! I0127 12:35:39.798695       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0127 12:36:51.229527    9948 command_runner.go:130] ! I0127 12:35:39.843688       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 12:36:51.229683    9948 command_runner.go:130] ! I0127 12:35:39.853521       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0127 12:36:51.230435    9948 command_runner.go:130] ! I0127 12:35:39.853736       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0127 12:36:51.230435    9948 command_runner.go:130] ! I0127 12:35:39.854572       1 instance.go:233] Using reconciler: lease
	I0127 12:36:51.230435    9948 command_runner.go:130] ! I0127 12:35:39.914509       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0127 12:36:51.231160    9948 command_runner.go:130] ! W0127 12:35:39.914792       1 genericapiserver.go:767] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.231160    9948 command_runner.go:130] ! I0127 12:35:40.232206       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0127 12:36:51.231259    9948 command_runner.go:130] ! I0127 12:35:40.232893       1 apis.go:106] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0127 12:36:51.231259    9948 command_runner.go:130] ! I0127 12:35:40.488401       1 apis.go:106] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0127 12:36:51.231259    9948 command_runner.go:130] ! I0127 12:35:40.610998       1 apis.go:106] API group "resource.k8s.io" is not enabled, skipping.
	I0127 12:36:51.231259    9948 command_runner.go:130] ! I0127 12:35:40.646097       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0127 12:36:51.231346    9948 command_runner.go:130] ! W0127 12:35:40.646401       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.231370    9948 command_runner.go:130] ! W0127 12:35:40.646556       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:51.231398    9948 command_runner.go:130] ! I0127 12:35:40.647499       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0127 12:36:51.231398    9948 command_runner.go:130] ! W0127 12:35:40.647580       1 genericapiserver.go:767] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.231398    9948 command_runner.go:130] ! I0127 12:35:40.648520       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0127 12:36:51.231398    9948 command_runner.go:130] ! I0127 12:35:40.649666       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0127 12:36:51.231398    9948 command_runner.go:130] ! W0127 12:35:40.649756       1 genericapiserver.go:767] Skipping API autoscaling/v2beta1 because it has no resources.
	I0127 12:36:51.231398    9948 command_runner.go:130] ! W0127 12:35:40.649766       1 genericapiserver.go:767] Skipping API autoscaling/v2beta2 because it has no resources.
	I0127 12:36:51.231398    9948 command_runner.go:130] ! I0127 12:35:40.651998       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0127 12:36:51.231398    9948 command_runner.go:130] ! W0127 12:35:40.652100       1 genericapiserver.go:767] Skipping API batch/v1beta1 because it has no resources.
	I0127 12:36:51.231398    9948 command_runner.go:130] ! I0127 12:35:40.653327       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0127 12:36:51.231398    9948 command_runner.go:130] ! W0127 12:35:40.653629       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.231398    9948 command_runner.go:130] ! W0127 12:35:40.653645       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:51.231398    9948 command_runner.go:130] ! I0127 12:35:40.654270       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0127 12:36:51.231398    9948 command_runner.go:130] ! W0127 12:35:40.654362       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.231398    9948 command_runner.go:130] ! W0127 12:35:40.654371       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1alpha2 because it has no resources.
	I0127 12:36:51.231398    9948 command_runner.go:130] ! I0127 12:35:40.655349       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0127 12:36:51.231398    9948 command_runner.go:130] ! W0127 12:35:40.655494       1 genericapiserver.go:767] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.231398    9948 command_runner.go:130] ! I0127 12:35:40.657969       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0127 12:36:51.231398    9948 command_runner.go:130] ! W0127 12:35:40.658067       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.231398    9948 command_runner.go:130] ! W0127 12:35:40.658077       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:51.231935    9948 command_runner.go:130] ! I0127 12:35:40.658845       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0127 12:36:51.231935    9948 command_runner.go:130] ! W0127 12:35:40.658940       1 genericapiserver.go:767] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.231998    9948 command_runner.go:130] ! W0127 12:35:40.658951       1 genericapiserver.go:767] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:51.231998    9948 command_runner.go:130] ! I0127 12:35:40.660043       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0127 12:36:51.231998    9948 command_runner.go:130] ! W0127 12:35:40.660172       1 genericapiserver.go:767] Skipping API policy/v1beta1 because it has no resources.
	I0127 12:36:51.232059    9948 command_runner.go:130] ! I0127 12:35:40.662431       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0127 12:36:51.232078    9948 command_runner.go:130] ! W0127 12:35:40.662519       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.232078    9948 command_runner.go:130] ! W0127 12:35:40.662531       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:51.232078    9948 command_runner.go:130] ! I0127 12:35:40.663022       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0127 12:36:51.232078    9948 command_runner.go:130] ! W0127 12:35:40.663153       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.232078    9948 command_runner.go:130] ! W0127 12:35:40.663165       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:51.232174    9948 command_runner.go:130] ! I0127 12:35:40.666344       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0127 12:36:51.232174    9948 command_runner.go:130] ! W0127 12:35:40.666495       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.232174    9948 command_runner.go:130] ! W0127 12:35:40.666521       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:51.232230    9948 command_runner.go:130] ! I0127 12:35:40.668345       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0127 12:36:51.232254    9948 command_runner.go:130] ! W0127 12:35:40.668516       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta3 because it has no resources.
	I0127 12:36:51.232254    9948 command_runner.go:130] ! W0127 12:35:40.668527       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0127 12:36:51.232316    9948 command_runner.go:130] ! W0127 12:35:40.668531       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.232316    9948 command_runner.go:130] ! I0127 12:35:40.673502       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0127 12:36:51.232316    9948 command_runner.go:130] ! W0127 12:35:40.673587       1 genericapiserver.go:767] Skipping API apps/v1beta2 because it has no resources.
	I0127 12:36:51.232316    9948 command_runner.go:130] ! W0127 12:35:40.673597       1 genericapiserver.go:767] Skipping API apps/v1beta1 because it has no resources.
	I0127 12:36:51.232370    9948 command_runner.go:130] ! I0127 12:35:40.676193       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0127 12:36:51.232397    9948 command_runner.go:130] ! W0127 12:35:40.676284       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.232397    9948 command_runner.go:130] ! W0127 12:35:40.676294       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:51.232397    9948 command_runner.go:130] ! I0127 12:35:40.677186       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0127 12:36:51.232397    9948 command_runner.go:130] ! W0127 12:35:40.677276       1 genericapiserver.go:767] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.232457    9948 command_runner.go:130] ! I0127 12:35:40.688978       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0127 12:36:51.232457    9948 command_runner.go:130] ! W0127 12:35:40.689072       1 genericapiserver.go:767] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:51.232537    9948 command_runner.go:130] ! I0127 12:35:41.320439       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.320849       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.321234       1 secure_serving.go:213] Serving securely on [::]:8443
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.321512       1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.324372       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.325924       1 controller.go:119] Starting legacy_token_tracking_controller
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.326193       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.327573       1 remote_available_controller.go:411] Starting RemoteAvailability controller
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.328217       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.328319       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.329060       1 cluster_authentication_trust_controller.go:462] Starting cluster_authentication_trust_controller controller
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.329095       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.329225       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.329996       1 controller.go:78] Starting OpenAPI AggregationController
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.330057       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.330085       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.330333       1 apiservice_controller.go:100] Starting APIServiceRegistrationController
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.330379       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.331391       1 customresource_discovery_controller.go:292] Starting DiscoveryController
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.331485       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.327929       1 local_available_controller.go:156] Starting LocalAvailability controller
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.333671       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.333703       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.333958       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.335863       1 controller.go:142] Starting OpenAPI controller
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.336704       1 controller.go:90] Starting OpenAPI V3 controller
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.336831       1 naming_controller.go:294] Starting NamingConditionController
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.337057       1 establishing_controller.go:81] Starting EstablishingController
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.337215       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.337324       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.337408       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.327968       1 aggregator.go:169] waiting for initial CRD sync...
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.387084       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.387441       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.450926       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.451366       1 policy_source.go:240] refreshing policies
	I0127 12:36:51.232563    9948 command_runner.go:130] ! I0127 12:35:41.488750       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0127 12:36:51.233093    9948 command_runner.go:130] ! I0127 12:35:41.488990       1 aggregator.go:171] initial CRD sync complete...
	I0127 12:36:51.233093    9948 command_runner.go:130] ! I0127 12:35:41.489245       1 autoregister_controller.go:144] Starting autoregister controller
	I0127 12:36:51.233093    9948 command_runner.go:130] ! I0127 12:35:41.489480       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:41.489653       1 cache.go:39] Caches are synced for autoregister controller
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:41.499151       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:41.527390       1 shared_informer.go:320] Caches are synced for configmaps
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:41.528625       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:41.529892       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:41.530639       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:41.531604       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:41.531638       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:41.534721       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:41.540933       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:41.545944       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:42.357869       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:42.374307       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 12:36:51.233136    9948 command_runner.go:130] ! W0127 12:35:43.074223       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.29.198.106]
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:43.075938       1 controller.go:615] quota admission added evaluator for: endpoints
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:43.085006       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:44.603084       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:44.989601       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:45.141450       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:45.327075       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 12:36:51.233136    9948 command_runner.go:130] ! I0127 12:35:45.338333       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 12:36:51.241939    9948 logs.go:123] Gathering logs for coredns [b3a9ed6e130c] ...
	I0127 12:36:51.242464    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a9ed6e130c"
	I0127 12:36:51.269635    9948 command_runner.go:130] > .:53
	I0127 12:36:51.269635    9948 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 5e2e325279dfa828a8fd1b44d83ab4703abb0247d4beadde42157147650fe687c0862eaa4caa15a5d9139c48c9a9dd5ec3cd962ba60368e8ffb4d02ae4d29aeb
	I0127 12:36:51.269635    9948 command_runner.go:130] > CoreDNS-1.11.3
	I0127 12:36:51.269635    9948 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0127 12:36:51.269816    9948 command_runner.go:130] > [INFO] 127.0.0.1:47464 - 34099 "HINFO IN 5313391549706874198.1206200090770907475. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062040871s
	I0127 12:36:51.270073    9948 logs.go:123] Gathering logs for kube-scheduler [a16e06a03860] ...
	I0127 12:36:51.270073    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a16e06a03860"
	I0127 12:36:51.298154    9948 command_runner.go:130] ! I0127 12:11:54.280431       1 serving.go:386] Generated self-signed cert in-memory
	I0127 12:36:51.298154    9948 command_runner.go:130] ! W0127 12:11:55.581187       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.581309       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.581382       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.581390       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 12:36:51.299138    9948 command_runner.go:130] ! I0127 12:11:55.694969       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! I0127 12:11:55.695193       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:51.299138    9948 command_runner.go:130] ! I0127 12:11:55.700077       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 12:36:51.299138    9948 command_runner.go:130] ! I0127 12:11:55.700446       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! I0127 12:11:55.700992       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:36:51.299138    9948 command_runner.go:130] ! I0127 12:11:55.701410       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.715521       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:51.299138    9948 command_runner.go:130] ! E0127 12:11:55.717196       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.717649       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0127 12:36:51.299138    9948 command_runner.go:130] ! E0127 12:11:55.717921       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.718583       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0127 12:36:51.299138    9948 command_runner.go:130] ! E0127 12:11:55.718820       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.728298       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! E0127 12:11:55.728648       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.729000       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0127 12:36:51.299138    9948 command_runner.go:130] ! E0127 12:11:55.729243       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.729633       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0127 12:36:51.299138    9948 command_runner.go:130] ! E0127 12:11:55.730380       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.729677       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0127 12:36:51.299138    9948 command_runner.go:130] ! E0127 12:11:55.730837       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.729713       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.729749       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:51.299138    9948 command_runner.go:130] ! E0127 12:11:55.731479       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.729782       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:51.299138    9948 command_runner.go:130] ! E0127 12:11:55.732242       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.729811       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:51.299138    9948 command_runner.go:130] ! E0127 12:11:55.734240       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! E0127 12:11:55.734704       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.738077       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0127 12:36:51.299138    9948 command_runner.go:130] ! E0127 12:11:55.738873       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.739202       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0127 12:36:51.299138    9948 command_runner.go:130] ! E0127 12:11:55.739366       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.299138    9948 command_runner.go:130] ! W0127 12:11:55.739719       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0127 12:36:51.300135    9948 command_runner.go:130] ! E0127 12:11:55.739865       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.300135    9948 command_runner.go:130] ! W0127 12:11:55.740221       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:55.740378       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:55.740608       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:55.740761       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:56.556598       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:56.557622       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:56.595830       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:56.596047       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:56.691826       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:56.691909       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:56.806048       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:56.806109       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:56.846817       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:56.847194       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:56.871314       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:56.872178       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:56.887386       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:56.887549       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:56.918642       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:56.919135       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:57.039216       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:57.039707       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:57.055169       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:57.055233       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:57.106656       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:57.106828       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:57.214186       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:57.214290       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:57.298150       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0127 12:36:51.302151    9948 command_runner.go:130] ! E0127 12:11:57.298337       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.302151    9948 command_runner.go:130] ! W0127 12:11:57.310098       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0127 12:36:51.303142    9948 command_runner.go:130] ! E0127 12:11:57.310312       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.303142    9948 command_runner.go:130] ! W0127 12:11:57.312117       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:51.303142    9948 command_runner.go:130] ! E0127 12:11:57.312192       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.303142    9948 command_runner.go:130] ! W0127 12:11:57.321525       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0127 12:36:51.303142    9948 command_runner.go:130] ! E0127 12:11:57.321832       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:51.303142    9948 command_runner.go:130] ! I0127 12:11:59.701790       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:36:51.303142    9948 command_runner.go:130] ! I0127 12:33:15.443053       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0127 12:36:51.303142    9948 command_runner.go:130] ! I0127 12:33:15.443143       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0127 12:36:51.303142    9948 command_runner.go:130] ! I0127 12:33:15.452458       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 12:36:51.303142    9948 command_runner.go:130] ! E0127 12:33:15.487412       1 run.go:72] "command failed" err="finished without leader elect"
	I0127 12:36:51.314139    9948 logs.go:123] Gathering logs for kindnet [373bec67270f] ...
	I0127 12:36:51.314139    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 373bec67270f"
	I0127 12:36:51.347179    9948 command_runner.go:130] ! I0127 12:35:44.464092       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0127 12:36:51.347179    9948 command_runner.go:130] ! I0127 12:35:44.489651       1 main.go:139] hostIP = 172.29.198.106
	I0127 12:36:51.347261    9948 command_runner.go:130] ! podIP = 172.29.198.106
	I0127 12:36:51.347261    9948 command_runner.go:130] ! I0127 12:35:44.489794       1 main.go:148] setting mtu 1500 for CNI 
	I0127 12:36:51.347261    9948 command_runner.go:130] ! I0127 12:35:44.489865       1 main.go:178] kindnetd IP family: "ipv4"
	I0127 12:36:51.347261    9948 command_runner.go:130] ! I0127 12:35:44.490024       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0127 12:36:51.347261    9948 command_runner.go:130] ! I0127 12:35:45.397363       1 main.go:239] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-40: Error: Could not process rule: Operation not supported
	I0127 12:36:51.347323    9948 command_runner.go:130] ! add table inet kindnet-network-policies
	I0127 12:36:51.347323    9948 command_runner.go:130] ! ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:51.347323    9948 command_runner.go:130] ! , skipping network policies
	I0127 12:36:51.347373    9948 command_runner.go:130] ! W0127 12:36:15.407551       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0127 12:36:51.347373    9948 command_runner.go:130] ! E0127 12:36:15.407870       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError"
	I0127 12:36:51.347373    9948 command_runner.go:130] ! I0127 12:36:25.405793       1 main.go:297] Handling node with IPs: map[172.29.198.106:{}]
	I0127 12:36:51.347457    9948 command_runner.go:130] ! I0127 12:36:25.405967       1 main.go:301] handling current node
	I0127 12:36:51.347457    9948 command_runner.go:130] ! I0127 12:36:25.406822       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:51.347457    9948 command_runner.go:130] ! I0127 12:36:25.406903       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:51.347457    9948 command_runner.go:130] ! I0127 12:36:25.408014       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.29.199.129 Flags: [] Table: 0 Realm: 0} 
	I0127 12:36:51.347508    9948 command_runner.go:130] ! I0127 12:36:25.408956       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:51.347549    9948 command_runner.go:130] ! I0127 12:36:25.409055       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:51.347549    9948 command_runner.go:130] ! I0127 12:36:25.409321       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.29.206.88 Flags: [] Table: 0 Realm: 0} 
	I0127 12:36:51.347620    9948 command_runner.go:130] ! I0127 12:36:35.400986       1 main.go:297] Handling node with IPs: map[172.29.198.106:{}]
	I0127 12:36:51.347620    9948 command_runner.go:130] ! I0127 12:36:35.401115       1 main.go:301] handling current node
	I0127 12:36:51.347620    9948 command_runner.go:130] ! I0127 12:36:35.401203       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:51.347620    9948 command_runner.go:130] ! I0127 12:36:35.401377       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:51.347620    9948 command_runner.go:130] ! I0127 12:36:35.401789       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:51.347701    9948 command_runner.go:130] ! I0127 12:36:35.401927       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:51.347701    9948 command_runner.go:130] ! I0127 12:36:45.400837       1 main.go:297] Handling node with IPs: map[172.29.198.106:{}]
	I0127 12:36:51.347723    9948 command_runner.go:130] ! I0127 12:36:45.401002       1 main.go:301] handling current node
	I0127 12:36:51.347723    9948 command_runner.go:130] ! I0127 12:36:45.401061       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:51.347748    9948 command_runner.go:130] ! I0127 12:36:45.401072       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:51.347748    9948 command_runner.go:130] ! I0127 12:36:45.401385       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:51.347748    9948 command_runner.go:130] ! I0127 12:36:45.401462       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:53.862498    9948 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:36:53.890742    9948 command_runner.go:130] > 2017
	I0127 12:36:53.890742    9948 api_server.go:72] duration metric: took 1m6.911385s to wait for apiserver process to appear ...
	I0127 12:36:53.890742    9948 api_server.go:88] waiting for apiserver healthz status ...
	I0127 12:36:53.899408    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 12:36:53.927140    9948 command_runner.go:130] > ea993630a310
	I0127 12:36:53.927244    9948 logs.go:282] 1 containers: [ea993630a310]
	I0127 12:36:53.936808    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 12:36:53.962367    9948 command_runner.go:130] > 0ef2a3b50bae
	I0127 12:36:53.962446    9948 logs.go:282] 1 containers: [0ef2a3b50bae]
	I0127 12:36:53.970030    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 12:36:53.993916    9948 command_runner.go:130] > b3a9ed6e130c
	I0127 12:36:53.993916    9948 command_runner.go:130] > f818dd15d8b0
	I0127 12:36:53.993916    9948 logs.go:282] 2 containers: [b3a9ed6e130c f818dd15d8b0]
	I0127 12:36:54.001905    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 12:36:54.027794    9948 command_runner.go:130] > ed51c7eaa966
	I0127 12:36:54.027794    9948 command_runner.go:130] > a16e06a03860
	I0127 12:36:54.027794    9948 logs.go:282] 2 containers: [ed51c7eaa966 a16e06a03860]
	I0127 12:36:54.034908    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 12:36:54.063526    9948 command_runner.go:130] > 0283b35dee3c
	I0127 12:36:54.063526    9948 command_runner.go:130] > bbec7ccef7da
	I0127 12:36:54.063526    9948 logs.go:282] 2 containers: [0283b35dee3c bbec7ccef7da]
	I0127 12:36:54.071374    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 12:36:54.099256    9948 command_runner.go:130] > 8d4872cda28d
	I0127 12:36:54.099337    9948 command_runner.go:130] > e07a66f8f619
	I0127 12:36:54.099337    9948 logs.go:282] 2 containers: [8d4872cda28d e07a66f8f619]
	I0127 12:36:54.108236    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0127 12:36:54.134342    9948 command_runner.go:130] > 373bec67270f
	I0127 12:36:54.134342    9948 command_runner.go:130] > d758000dda95
	I0127 12:36:54.134342    9948 logs.go:282] 2 containers: [373bec67270f d758000dda95]
	I0127 12:36:54.135331    9948 logs.go:123] Gathering logs for kubelet ...
	I0127 12:36:54.135331    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:32 multinode-659000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1507]: I0127 12:35:33.096330    1507 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1507]: I0127 12:35:33.097069    1507 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1507]: I0127 12:35:33.098504    1507 server.go:954] "Client rotation is on, will bootstrap in background"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1507]: E0127 12:35:33.099084    1507 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1565]: I0127 12:35:33.855505    1565 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1565]: I0127 12:35:33.856023    1565 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1565]: I0127 12:35:33.856456    1565 server.go:954] "Client rotation is on, will bootstrap in background"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1565]: E0127 12:35:33.856573    1565 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:34 multinode-659000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.167839    1648 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.168570    1648 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.169526    1648 server.go:954] "Client rotation is on, will bootstrap in background"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.171330    1648 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.190537    1648 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.208219    1648 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.208354    1648 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.217489    1648 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.217603    1648 server.go:841] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.218319    1648 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.218396    1648 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-659000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.218720    1648 topology_manager.go:138] "Creating topology manager with none policy"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.218780    1648 container_manager_linux.go:304] "Creating device plugin manager"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.219430    1648 state_mem.go:36] "Initialized new in-memory state store"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.221396    1648 kubelet.go:446] "Attempting to sync node with API server"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.221465    1648 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.221524    1648 kubelet.go:352] "Adding apiserver pod source"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.221568    1648 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.230949    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-659000&limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.231085    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-659000&limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.232363    1648 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="docker" version="27.4.0" apiVersion="v1"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.236967    1648 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0127 12:36:54.171332    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.237190    1648 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.245589    1648 watchdog_linux.go:99] "Systemd watchdog is not enabled"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.245760    1648 server.go:1287] "Started kubelet"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.246317    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.246411    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.246814    1648 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.247495    1648 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.249106    1648 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.260914    1648 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.262947    1648 server.go:490] "Adding debug handlers to kubelet server"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.264052    1648 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.267083    1648 volume_manager.go:297] "Starting Kubelet Volume Manager"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.267485    1648 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"multinode-659000\" not found"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.270946    1648 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.29.198.106:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-659000.181e8cd12d2fa1af  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-659000,UID:multinode-659000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-659000,},FirstTimestamp:2025-01-27 12:35:36.245739951 +0000 UTC m=+0.150414507,LastTimestamp:2025-01-27 12:35:36.245739951 +0000 UTC m=+0.150414507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-659000,}"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.275270    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-659000?timeout=10s\": dial tcp 172.29.198.106:8443: connect: connection refused" interval="200ms"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.275715    1648 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.280615    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.280911    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.282354    1648 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.282424    1648 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.282441    1648 factory.go:221] Registration of the systemd container factory successfully
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.345823    1648 reconciler.go:26] "Reconciler: start to sync state"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.348883    1648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.352701    1648 cpu_manager.go:221] "Starting CPU manager" policy="none"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.352736    1648 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.352866    1648 state_mem.go:36] "Initialized new in-memory state store"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353577    1648 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353729    1648 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353769    1648 policy_none.go:49] "None policy: Start"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353902    1648 memory_manager.go:186] "Starting memorymanager" policy="None"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353967    1648 state_mem.go:35] "Initializing new in-memory state store"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.354751    1648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.354791    1648 status_manager.go:227] "Starting to sync pod status with apiserver"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.354811    1648 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.354819    1648 kubelet.go:2388] "Starting kubelet main sync loop"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.354862    1648 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.355393    1648 state_mem.go:75] "Updated machine memory state"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.358802    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.358857    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.371233    1648 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"multinode-659000\" not found"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.373395    1648 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.373786    1648 eviction_manager.go:189] "Eviction manager: starting control loop"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.373887    1648 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.380088    1648 iptables.go:577] "Could not set up iptables canary" err=<
	I0127 12:36:54.172345    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.380760    1648 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.380984    1648 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-659000\" not found"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.382902    1648 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.468172    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.468821    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c82c0ec4aeaa9b21462a8248326ae982d6f7a0aee31347f1a58d216f0335177"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.468934    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2d0bd65fe50d3b8a64acf8ee065aa49d1a51b768c5fe6fe9532d26fa35aa7b1"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.468988    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bd5bf99bede3e691e572fc4b8a37f4f42f8a9b2520adf8bc87bdf76e8258a4b"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.469050    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5423fc5113290b937df9b531c5fbd748c5d927fd5e170e8126b67bae6a814384"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.470252    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.475717    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.477090    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-659000?timeout=10s\": dial tcp 172.29.198.106:8443: connect: connection refused" interval="400ms"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.480196    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.198.106:8443: connect: connection refused" node="multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.487429    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc9ef8ee86ec2e354006c4c56f82fe9ec4df472096628ad620faba06fa0b1ff8"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.508448    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a53e133a1cd6ab9514cb15ac3c4f1d5683d17008b482cebb08bf4809e060709"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.523288    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="319cddeebceb6ec82b5865f1c67eaf88948a282ace1113869910f5bf8c717d83"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.545844    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b522c4c9f4c776ea35298b9eaf7c05d64bddd6f385e12252bdf6aada9a3e20d"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.566476    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e6c90fc43fa6c0754218ff1c4162045d-kubeconfig\") pod \"kube-scheduler-multinode-659000\" (UID: \"e6c90fc43fa6c0754218ff1c4162045d\") " pod="kube-system/kube-scheduler-multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.566534    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b9fbd177058ba298cde2a92c4ef5c601-k8s-certs\") pod \"kube-apiserver-multinode-659000\" (UID: \"b9fbd177058ba298cde2a92c4ef5c601\") " pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.566560    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-kubeconfig\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567472    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567527    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/575cefa3aa8017dce576fa244e719a4e-etcd-certs\") pod \"etcd-multinode-659000\" (UID: \"575cefa3aa8017dce576fa244e719a4e\") " pod="kube-system/etcd-multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567546    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/575cefa3aa8017dce576fa244e719a4e-etcd-data\") pod \"etcd-multinode-659000\" (UID: \"575cefa3aa8017dce576fa244e719a4e\") " pod="kube-system/etcd-multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567563    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b9fbd177058ba298cde2a92c4ef5c601-ca-certs\") pod \"kube-apiserver-multinode-659000\" (UID: \"b9fbd177058ba298cde2a92c4ef5c601\") " pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567580    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-ca-certs\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567687    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-flexvolume-dir\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567720    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-k8s-certs\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567745    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b9fbd177058ba298cde2a92c4ef5c601-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-659000\" (UID: \"b9fbd177058ba298cde2a92c4ef5c601\") " pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567166    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51ee4649b24aa281b3767c049c3c1d4063e516b98501648152da39ee45cb0b26"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.569350    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.570289    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.173335    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.681872    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.682569    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.198.106:8443: connect: connection refused" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.878668    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-659000?timeout=10s\": dial tcp 172.29.198.106:8443: connect: connection refused" interval="800ms"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: W0127 12:35:37.056372    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.056534    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: I0127 12:35:37.084276    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.085344    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.198.106:8443: connect: connection refused" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: W0127 12:35:37.281985    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.282078    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: W0127 12:35:37.629266    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-659000&limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.629409    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-659000&limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: W0127 12:35:37.673700    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.673876    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.680515    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-659000?timeout=10s\": dial tcp 172.29.198.106:8443: connect: connection refused" interval="1.6s"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: I0127 12:35:37.887498    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.888458    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.198.106:8443: connect: connection refused" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: E0127 12:35:39.058364    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: E0127 12:35:39.084210    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: E0127 12:35:39.099659    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: E0127 12:35:39.112572    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: I0127 12:35:39.489967    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:40 multinode-659000 kubelet[1648]: E0127 12:35:40.123734    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:40 multinode-659000 kubelet[1648]: E0127 12:35:40.124212    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:40 multinode-659000 kubelet[1648]: E0127 12:35:40.124507    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:40 multinode-659000 kubelet[1648]: E0127 12:35:40.124790    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.138584    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.139346    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.139719    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.469180    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.513020    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-multinode-659000\" already exists" pod="kube-system/etcd-multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.513064    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.538800    1648 kubelet_node_status.go:125] "Node was previously registered" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.538905    1648 kubelet_node_status.go:79] "Successfully registered node" node="multinode-659000"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.538949    1648 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.539897    1648 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0127 12:36:54.174330    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.540655    1648 setters.go:602] "Node became not ready" node="multinode-659000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-27T12:35:41Z","lastTransitionTime":"2025-01-27T12:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.555833    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-multinode-659000\" already exists" pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.555924    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.574323    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-multinode-659000\" already exists" pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.574484    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-multinode-659000"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.589698    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-multinode-659000\" already exists" pod="kube-system/kube-scheduler-multinode-659000"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.247993    1648 apiserver.go:52] "Watching apiserver"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.255092    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-659000" podUID="f19e9efc-57cc-4e2a-b365-920592a7f352"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.257281    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.257504    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.261197    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/etcd-multinode-659000" podUID="d2a9c448-86a1-48e3-8b48-345c937e5bb4"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.277187    1648 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.304401    1648 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/etcd-multinode-659000"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.304607    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-659000"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.309849    1648 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.309963    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.343249    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae3b8daf-d674-4cfe-8652-cb5ff6ba8615-lib-modules\") pod \"kube-proxy-s46mv\" (UID: \"ae3b8daf-d674-4cfe-8652-cb5ff6ba8615\") " pod="kube-system/kube-proxy-s46mv"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.343617    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9b617a9c-e2b8-45fd-bee2-45cb03d4cd42-cni-cfg\") pod \"kindnet-z2hqq\" (UID: \"9b617a9c-e2b8-45fd-bee2-45cb03d4cd42\") " pod="kube-system/kindnet-z2hqq"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.343779    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b617a9c-e2b8-45fd-bee2-45cb03d4cd42-lib-modules\") pod \"kindnet-z2hqq\" (UID: \"9b617a9c-e2b8-45fd-bee2-45cb03d4cd42\") " pod="kube-system/kindnet-z2hqq"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.343961    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae3b8daf-d674-4cfe-8652-cb5ff6ba8615-xtables-lock\") pod \"kube-proxy-s46mv\" (UID: \"ae3b8daf-d674-4cfe-8652-cb5ff6ba8615\") " pod="kube-system/kube-proxy-s46mv"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.344263    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b617a9c-e2b8-45fd-bee2-45cb03d4cd42-xtables-lock\") pod \"kindnet-z2hqq\" (UID: \"9b617a9c-e2b8-45fd-bee2-45cb03d4cd42\") " pod="kube-system/kindnet-z2hqq"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.344443    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bcfd7913-1bc0-4c24-882f-2be92ec9b046-tmp\") pod \"storage-provisioner\" (UID: \"bcfd7913-1bc0-4c24-882f-2be92ec9b046\") " pod="kube-system/storage-provisioner"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.345456    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.345573    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:42.845554363 +0000 UTC m=+6.750229019 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.362165    1648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bf31ca1befb4fb3e8f2fd27458a3b80" path="/var/lib/kubelet/pods/6bf31ca1befb4fb3e8f2fd27458a3b80/volumes"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.363294    1648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7291ea72d8be6e47ed8b536906d73549" path="/var/lib/kubelet/pods/7291ea72d8be6e47ed8b536906d73549/volumes"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.396667    1648 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.400478    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.400505    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.400550    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:42.900534148 +0000 UTC m=+6.805208804 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.494698    1648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-659000" podStartSLOduration=0.494540064 podStartE2EDuration="494.540064ms" podCreationTimestamp="2025-01-27 12:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 12:35:42.473709794 +0000 UTC m=+6.378384350" watchObservedRunningTime="2025-01-27 12:35:42.494540064 +0000 UTC m=+6.399214620"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.494964    1648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-659000" podStartSLOduration=0.494955765 podStartE2EDuration="494.955765ms" podCreationTimestamp="2025-01-27 12:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 12:35:42.493805361 +0000 UTC m=+6.398480017" watchObservedRunningTime="2025-01-27 12:35:42.494955765 +0000 UTC m=+6.399630321"
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.849608    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:54.175334    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.849827    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:43.849803559 +0000 UTC m=+7.754478115 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.951539    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.951579    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.951637    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:43.951620201 +0000 UTC m=+7.856294757 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: I0127 12:35:43.230846    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b613e9a7a356580fd5381e358408317fd6120a119c23f3f196adda302e5ca97f"
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: I0127 12:35:43.240666    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34d579bb511fec290478f20b13002063b43c1a71bd6f2f45f1d83bbd8ac971ab"
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.588436    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: I0127 12:35:43.594121    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d43e4cc62e0877d4b65191623d58195cd33c60eff33c6e49e605f69620d5115f"
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: I0127 12:35:43.594816    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-659000" podUID="f19e9efc-57cc-4e2a-b365-920592a7f352"
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.861607    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.861754    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:45.861734662 +0000 UTC m=+9.766409318 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.962791    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.962845    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.963033    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:45.962955102 +0000 UTC m=+9.867629758 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:44 multinode-659000 kubelet[1648]: E0127 12:35:44.356390    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.355639    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.883867    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.883991    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:49.883972962 +0000 UTC m=+13.788647618 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.984260    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.984313    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.984377    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:49.984359299 +0000 UTC m=+13.889033855 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:46 multinode-659000 kubelet[1648]: E0127 12:35:46.358731    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:46 multinode-659000 kubelet[1648]: E0127 12:35:46.386967    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:47 multinode-659000 kubelet[1648]: E0127 12:35:47.355582    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:48 multinode-659000 kubelet[1648]: E0127 12:35:48.356308    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:49 multinode-659000 kubelet[1648]: E0127 12:35:49.356027    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:49 multinode-659000 kubelet[1648]: E0127 12:35:49.925365    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:49 multinode-659000 kubelet[1648]: E0127 12:35:49.925459    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:57.925443152 +0000 UTC m=+21.830117808 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:50 multinode-659000 kubelet[1648]: E0127 12:35:50.027100    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:50 multinode-659000 kubelet[1648]: E0127 12:35:50.027219    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:50 multinode-659000 kubelet[1648]: E0127 12:35:50.027346    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:58.027289813 +0000 UTC m=+21.931964469 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.176338    9948 command_runner.go:130] > Jan 27 12:35:50 multinode-659000 kubelet[1648]: E0127 12:35:50.355319    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:51 multinode-659000 kubelet[1648]: E0127 12:35:51.356503    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:51 multinode-659000 kubelet[1648]: E0127 12:35:51.388594    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:52 multinode-659000 kubelet[1648]: E0127 12:35:52.357390    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:53 multinode-659000 kubelet[1648]: E0127 12:35:53.355568    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:54 multinode-659000 kubelet[1648]: E0127 12:35:54.355531    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:55 multinode-659000 kubelet[1648]: E0127 12:35:55.356228    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:56 multinode-659000 kubelet[1648]: E0127 12:35:56.355726    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:56 multinode-659000 kubelet[1648]: E0127 12:35:56.392446    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:57 multinode-659000 kubelet[1648]: E0127 12:35:57.355790    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.001233    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.001401    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:36:14.001383565 +0000 UTC m=+37.906058121 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.101493    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.101659    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.101748    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:36:14.101732786 +0000 UTC m=+38.006407342 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.365026    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:35:59 multinode-659000 kubelet[1648]: E0127 12:35:59.356031    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:36:00 multinode-659000 kubelet[1648]: E0127 12:36:00.356282    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:36:01 multinode-659000 kubelet[1648]: E0127 12:36:01.356209    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:36:01 multinode-659000 kubelet[1648]: E0127 12:36:01.394292    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:36:02 multinode-659000 kubelet[1648]: E0127 12:36:02.355777    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:36:03 multinode-659000 kubelet[1648]: E0127 12:36:03.356166    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:36:04 multinode-659000 kubelet[1648]: E0127 12:36:04.356089    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:36:05 multinode-659000 kubelet[1648]: E0127 12:36:05.355458    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:36:06 multinode-659000 kubelet[1648]: E0127 12:36:06.356120    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:36:06 multinode-659000 kubelet[1648]: E0127 12:36:06.396811    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:36:07 multinode-659000 kubelet[1648]: E0127 12:36:07.355573    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.177330    9948 command_runner.go:130] > Jan 27 12:36:08 multinode-659000 kubelet[1648]: E0127 12:36:08.355837    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:09 multinode-659000 kubelet[1648]: E0127 12:36:09.355284    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:10 multinode-659000 kubelet[1648]: E0127 12:36:10.356199    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:11 multinode-659000 kubelet[1648]: E0127 12:36:11.356023    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:11 multinode-659000 kubelet[1648]: E0127 12:36:11.398054    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:12 multinode-659000 kubelet[1648]: E0127 12:36:12.355492    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:13 multinode-659000 kubelet[1648]: E0127 12:36:13.356291    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.058689    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.058911    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:36:46.058858304 +0000 UTC m=+69.963532860 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.159091    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.159277    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.159495    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:36:46.15947175 +0000 UTC m=+70.064146406 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.357000    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:15 multinode-659000 kubelet[1648]: I0127 12:36:15.031682    1648 scope.go:117] "RemoveContainer" containerID="134620caeeb93fda5b32a71962e13d1994830a35b93b18ad2387296500dff7b5"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:15 multinode-659000 kubelet[1648]: I0127 12:36:15.032024    1648 scope.go:117] "RemoveContainer" containerID="9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:15 multinode-659000 kubelet[1648]: E0127 12:36:15.032236    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bcfd7913-1bc0-4c24-882f-2be92ec9b046)\"" pod="kube-system/storage-provisioner" podUID="bcfd7913-1bc0-4c24-882f-2be92ec9b046"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:15 multinode-659000 kubelet[1648]: E0127 12:36:15.355738    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:16 multinode-659000 kubelet[1648]: E0127 12:36:16.356191    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:16 multinode-659000 kubelet[1648]: E0127 12:36:16.399212    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:17 multinode-659000 kubelet[1648]: E0127 12:36:17.355082    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:18 multinode-659000 kubelet[1648]: E0127 12:36:18.356067    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:19 multinode-659000 kubelet[1648]: E0127 12:36:19.355675    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:20 multinode-659000 kubelet[1648]: E0127 12:36:20.356455    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:21 multinode-659000 kubelet[1648]: E0127 12:36:21.355971    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:21 multinode-659000 kubelet[1648]: E0127 12:36:21.401078    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:22 multinode-659000 kubelet[1648]: E0127 12:36:22.355954    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:23 multinode-659000 kubelet[1648]: E0127 12:36:23.355387    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:24 multinode-659000 kubelet[1648]: E0127 12:36:24.355437    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:25 multinode-659000 kubelet[1648]: E0127 12:36:25.356289    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:26 multinode-659000 kubelet[1648]: E0127 12:36:26.356493    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:26 multinode-659000 kubelet[1648]: E0127 12:36:26.402364    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:54.178331    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 kubelet[1648]: E0127 12:36:27.356407    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.179341    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 kubelet[1648]: I0127 12:36:27.357050    1648 scope.go:117] "RemoveContainer" containerID="9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f"
	I0127 12:36:54.179341    9948 command_runner.go:130] > Jan 27 12:36:28 multinode-659000 kubelet[1648]: E0127 12:36:28.356371    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.179341    9948 command_runner.go:130] > Jan 27 12:36:29 multinode-659000 kubelet[1648]: E0127 12:36:29.355555    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.179341    9948 command_runner.go:130] > Jan 27 12:36:30 multinode-659000 kubelet[1648]: E0127 12:36:30.356227    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:54.179341    9948 command_runner.go:130] > Jan 27 12:36:31 multinode-659000 kubelet[1648]: E0127 12:36:31.356043    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:54.179341    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]: I0127 12:36:36.363314    1648 scope.go:117] "RemoveContainer" containerID="5f274e5a8851d2aeb5403952c3fba0274fe53614e2e0995d1046693d7e725d5d"
	I0127 12:36:54.179341    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]: E0127 12:36:36.393311    1648 iptables.go:577] "Could not set up iptables canary" err=<
	I0127 12:36:54.179341    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0127 12:36:54.179341    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0127 12:36:54.179341    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0127 12:36:54.179341    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0127 12:36:54.179341    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]: I0127 12:36:36.409087    1648 scope.go:117] "RemoveContainer" containerID="f91e9c2d3ba64a6d34c9bab7c1953b46f4006e0bb493bd1ae993c489cd76e02c"
	I0127 12:36:54.227269    9948 logs.go:123] Gathering logs for kube-apiserver [ea993630a310] ...
	I0127 12:36:54.227269    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea993630a310"
	I0127 12:36:54.256885    9948 command_runner.go:130] ! W0127 12:35:38.851605       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0127 12:36:54.257701    9948 command_runner.go:130] ! I0127 12:35:38.853397       1 options.go:238] external host was not specified, using 172.29.198.106
	I0127 12:36:54.257701    9948 command_runner.go:130] ! I0127 12:35:38.858160       1 server.go:143] Version: v1.32.1
	I0127 12:36:54.257852    9948 command_runner.go:130] ! I0127 12:35:38.858493       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:54.257852    9948 command_runner.go:130] ! I0127 12:35:39.798695       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0127 12:36:54.257932    9948 command_runner.go:130] ! I0127 12:35:39.843688       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 12:36:54.258085    9948 command_runner.go:130] ! I0127 12:35:39.853521       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0127 12:36:54.258113    9948 command_runner.go:130] ! I0127 12:35:39.853736       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0127 12:36:54.258113    9948 command_runner.go:130] ! I0127 12:35:39.854572       1 instance.go:233] Using reconciler: lease
	I0127 12:36:54.258113    9948 command_runner.go:130] ! I0127 12:35:39.914509       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0127 12:36:54.258113    9948 command_runner.go:130] ! W0127 12:35:39.914792       1 genericapiserver.go:767] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258113    9948 command_runner.go:130] ! I0127 12:35:40.232206       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0127 12:36:54.258113    9948 command_runner.go:130] ! I0127 12:35:40.232893       1 apis.go:106] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0127 12:36:54.258113    9948 command_runner.go:130] ! I0127 12:35:40.488401       1 apis.go:106] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0127 12:36:54.258113    9948 command_runner.go:130] ! I0127 12:35:40.610998       1 apis.go:106] API group "resource.k8s.io" is not enabled, skipping.
	I0127 12:36:54.258113    9948 command_runner.go:130] ! I0127 12:35:40.646097       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0127 12:36:54.258113    9948 command_runner.go:130] ! W0127 12:35:40.646401       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258113    9948 command_runner.go:130] ! W0127 12:35:40.646556       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:54.258651    9948 command_runner.go:130] ! I0127 12:35:40.647499       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0127 12:36:54.258651    9948 command_runner.go:130] ! W0127 12:35:40.647580       1 genericapiserver.go:767] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258651    9948 command_runner.go:130] ! I0127 12:35:40.648520       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0127 12:36:54.258697    9948 command_runner.go:130] ! I0127 12:35:40.649666       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0127 12:36:54.258697    9948 command_runner.go:130] ! W0127 12:35:40.649756       1 genericapiserver.go:767] Skipping API autoscaling/v2beta1 because it has no resources.
	I0127 12:36:54.258697    9948 command_runner.go:130] ! W0127 12:35:40.649766       1 genericapiserver.go:767] Skipping API autoscaling/v2beta2 because it has no resources.
	I0127 12:36:54.258745    9948 command_runner.go:130] ! I0127 12:35:40.651998       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0127 12:36:54.258745    9948 command_runner.go:130] ! W0127 12:35:40.652100       1 genericapiserver.go:767] Skipping API batch/v1beta1 because it has no resources.
	I0127 12:36:54.258745    9948 command_runner.go:130] ! I0127 12:35:40.653327       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0127 12:36:54.258792    9948 command_runner.go:130] ! W0127 12:35:40.653629       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.653645       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! I0127 12:35:40.654270       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.654362       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.654371       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1alpha2 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! I0127 12:35:40.655349       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.655494       1 genericapiserver.go:767] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! I0127 12:35:40.657969       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.658067       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.658077       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! I0127 12:35:40.658845       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.658940       1 genericapiserver.go:767] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.658951       1 genericapiserver.go:767] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! I0127 12:35:40.660043       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.660172       1 genericapiserver.go:767] Skipping API policy/v1beta1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! I0127 12:35:40.662431       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.662519       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.662531       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! I0127 12:35:40.663022       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.663153       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.663165       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! I0127 12:35:40.666344       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.666495       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.666521       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! I0127 12:35:40.668345       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.668516       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta3 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.668527       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.668531       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! I0127 12:35:40.673502       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.673587       1 genericapiserver.go:767] Skipping API apps/v1beta2 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.673597       1 genericapiserver.go:767] Skipping API apps/v1beta1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! I0127 12:35:40.676193       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.676284       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.676294       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! I0127 12:35:40.677186       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.677276       1 genericapiserver.go:767] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.258821    9948 command_runner.go:130] ! I0127 12:35:40.688978       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0127 12:36:54.258821    9948 command_runner.go:130] ! W0127 12:35:40.689072       1 genericapiserver.go:767] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:54.259365    9948 command_runner.go:130] ! I0127 12:35:41.320439       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 12:36:54.259365    9948 command_runner.go:130] ! I0127 12:35:41.320849       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:54.259441    9948 command_runner.go:130] ! I0127 12:35:41.321234       1 secure_serving.go:213] Serving securely on [::]:8443
	I0127 12:36:54.259441    9948 command_runner.go:130] ! I0127 12:35:41.321512       1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0127 12:36:54.259441    9948 command_runner.go:130] ! I0127 12:35:41.324372       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:54.259441    9948 command_runner.go:130] ! I0127 12:35:41.325924       1 controller.go:119] Starting legacy_token_tracking_controller
	I0127 12:36:54.259542    9948 command_runner.go:130] ! I0127 12:35:41.326193       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0127 12:36:54.259542    9948 command_runner.go:130] ! I0127 12:35:41.327573       1 remote_available_controller.go:411] Starting RemoteAvailability controller
	I0127 12:36:54.259542    9948 command_runner.go:130] ! I0127 12:35:41.328217       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I0127 12:36:54.259542    9948 command_runner.go:130] ! I0127 12:35:41.328319       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.329060       1 cluster_authentication_trust_controller.go:462] Starting cluster_authentication_trust_controller controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.329095       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.329225       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.329996       1 controller.go:78] Starting OpenAPI AggregationController
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.330057       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.330085       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.330333       1 apiservice_controller.go:100] Starting APIServiceRegistrationController
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.330379       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.331391       1 customresource_discovery_controller.go:292] Starting DiscoveryController
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.331485       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.327929       1 local_available_controller.go:156] Starting LocalAvailability controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.333671       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.333703       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.333958       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.335863       1 controller.go:142] Starting OpenAPI controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.336704       1 controller.go:90] Starting OpenAPI V3 controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.336831       1 naming_controller.go:294] Starting NamingConditionController
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.337057       1 establishing_controller.go:81] Starting EstablishingController
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.337215       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.337324       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.337408       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.327968       1 aggregator.go:169] waiting for initial CRD sync...
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.387084       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.387441       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.450926       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.451366       1 policy_source.go:240] refreshing policies
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.488750       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.488990       1 aggregator.go:171] initial CRD sync complete...
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.489245       1 autoregister_controller.go:144] Starting autoregister controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.489480       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.489653       1 cache.go:39] Caches are synced for autoregister controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.499151       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.527390       1 shared_informer.go:320] Caches are synced for configmaps
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.528625       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.529892       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.530639       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0127 12:36:54.259595    9948 command_runner.go:130] ! I0127 12:35:41.531604       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0127 12:36:54.260117    9948 command_runner.go:130] ! I0127 12:35:41.531638       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0127 12:36:54.260117    9948 command_runner.go:130] ! I0127 12:35:41.534721       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0127 12:36:54.260174    9948 command_runner.go:130] ! I0127 12:35:41.540933       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 12:36:54.260174    9948 command_runner.go:130] ! I0127 12:35:41.545944       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0127 12:36:54.260174    9948 command_runner.go:130] ! I0127 12:35:42.357869       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0127 12:36:54.260174    9948 command_runner.go:130] ! I0127 12:35:42.374307       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 12:36:54.260174    9948 command_runner.go:130] ! W0127 12:35:43.074223       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.29.198.106]
	I0127 12:36:54.260258    9948 command_runner.go:130] ! I0127 12:35:43.075938       1 controller.go:615] quota admission added evaluator for: endpoints
	I0127 12:36:54.260258    9948 command_runner.go:130] ! I0127 12:35:43.085006       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0127 12:36:54.260258    9948 command_runner.go:130] ! I0127 12:35:44.603084       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0127 12:36:54.260258    9948 command_runner.go:130] ! I0127 12:35:44.989601       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0127 12:36:54.260258    9948 command_runner.go:130] ! I0127 12:35:45.141450       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0127 12:36:54.260258    9948 command_runner.go:130] ! I0127 12:35:45.327075       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 12:36:54.260258    9948 command_runner.go:130] ! I0127 12:35:45.338333       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 12:36:54.267730    9948 logs.go:123] Gathering logs for etcd [0ef2a3b50bae] ...
	I0127 12:36:54.267790    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ef2a3b50bae"
	I0127 12:36:54.292327    9948 command_runner.go:130] ! {"level":"warn","ts":"2025-01-27T12:35:38.248296Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0127 12:36:54.292475    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.248523Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.29.198.106:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.29.198.106:2380","--initial-cluster=multinode-659000=https://172.29.198.106:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.29.198.106:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.29.198.106:2380","--name=multinode-659000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0127 12:36:54.292475    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.249804Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0127 12:36:54.292559    9948 command_runner.go:130] ! {"level":"warn","ts":"2025-01-27T12:35:38.249933Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0127 12:36:54.292559    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.249951Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://172.29.198.106:2380"]}
	I0127 12:36:54.292648    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.250358Z","caller":"embed/etcd.go:497","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0127 12:36:54.292648    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.255871Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.29.198.106:2379"]}
	I0127 12:36:54.292793    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.258341Z","caller":"embed/etcd.go:311","msg":"starting an etcd server","etcd-version":"3.5.16","git-sha":"f20bbad","go-version":"go1.22.7","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-659000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.29.198.106:2380"],"listen-peer-urls":["https://172.29.198.106:2380"],"advertise-client-urls":["https://172.29.198.106:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.198.106:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0127 12:36:54.292868    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.282453Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"23.428079ms"}
	I0127 12:36:54.292868    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.322950Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0127 12:36:54.292938    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.352706Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"d020e240c474bd89","local-member-id":"925e6945be3a5b5b","commit-index":2090}
	I0127 12:36:54.292938    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.354002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b switched to configuration voters=()"}
	I0127 12:36:54.292938    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.354079Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became follower at term 2"}
	I0127 12:36:54.292938    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.354103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 925e6945be3a5b5b [peers: [], term: 2, commit: 2090, applied: 0, lastindex: 2090, lastterm: 2]"}
	I0127 12:36:54.292938    9948 command_runner.go:130] ! {"level":"warn","ts":"2025-01-27T12:35:38.367343Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0127 12:36:54.292938    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.371532Z","caller":"mvcc/kvstore.go:346","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1388}
	I0127 12:36:54.292938    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.377112Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":1808}
	I0127 12:36:54.292938    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.386775Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0127 12:36:54.293168    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.395908Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"925e6945be3a5b5b","timeout":"7s"}
	I0127 12:36:54.293168    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.396497Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"925e6945be3a5b5b"}
	I0127 12:36:54.293168    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.396684Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"925e6945be3a5b5b","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	I0127 12:36:54.293234    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.396970Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	I0127 12:36:54.293234    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.399309Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0127 12:36:54.293374    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.401105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b switched to configuration voters=(10546983125613435739)"}
	I0127 12:36:54.293446    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.400045Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0127 12:36:54.293446    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.404834Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0127 12:36:54.293533    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.404888Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0127 12:36:54.293595    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.405566Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d020e240c474bd89","local-member-id":"925e6945be3a5b5b","added-peer-id":"925e6945be3a5b5b","added-peer-peer-urls":["https://172.29.204.17:2380"]}
	I0127 12:36:54.293595    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.405716Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d020e240c474bd89","local-member-id":"925e6945be3a5b5b","cluster-version":"3.5"}
	I0127 12:36:54.293595    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.405754Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0127 12:36:54.293680    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.407643Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0127 12:36:54.293747    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.408091Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"925e6945be3a5b5b","initial-advertise-peer-urls":["https://172.29.198.106:2380"],"listen-peer-urls":["https://172.29.198.106:2380"],"advertise-client-urls":["https://172.29.198.106:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.198.106:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0127 12:36:54.293747    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.408386Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0127 12:36:54.293875    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.408686Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.29.198.106:2380"}
	I0127 12:36:54.293875    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.408809Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.29.198.106:2380"}
	I0127 12:36:54.293927    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.355207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b is starting a new election at term 2"}
	I0127 12:36:54.293927    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.355615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became pre-candidate at term 2"}
	I0127 12:36:54.293962    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.355770Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b received MsgPreVoteResp from 925e6945be3a5b5b at term 2"}
	I0127 12:36:54.293962    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.355926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became candidate at term 3"}
	I0127 12:36:54.293962    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.356088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b received MsgVoteResp from 925e6945be3a5b5b at term 3"}
	I0127 12:36:54.293962    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.356235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became leader at term 3"}
	I0127 12:36:54.294034    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.356449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 925e6945be3a5b5b elected leader 925e6945be3a5b5b at term 3"}
	I0127 12:36:54.294034    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.368540Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"925e6945be3a5b5b","local-member-attributes":"{Name:multinode-659000 ClientURLs:[https://172.29.198.106:2379]}","request-path":"/0/members/925e6945be3a5b5b/attributes","cluster-id":"d020e240c474bd89","publish-timeout":"7s"}
	I0127 12:36:54.294034    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.369045Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0127 12:36:54.294093    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.371833Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0127 12:36:54.294093    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.372238Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0127 12:36:54.294093    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.374158Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0127 12:36:54.294147    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.383680Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0127 12:36:54.294147    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.391404Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0127 12:36:54.294192    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.392982Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.29.198.106:2379"}
	I0127 12:36:54.294215    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.399505Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0127 12:36:54.302336    9948 logs.go:123] Gathering logs for kube-scheduler [ed51c7eaa966] ...
	I0127 12:36:54.302336    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed51c7eaa966"
	I0127 12:36:54.329330    9948 command_runner.go:130] ! I0127 12:35:39.285954       1 serving.go:386] Generated self-signed cert in-memory
	I0127 12:36:54.329912    9948 command_runner.go:130] ! W0127 12:35:41.361191       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0127 12:36:54.329912    9948 command_runner.go:130] ! W0127 12:35:41.363231       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0127 12:36:54.329986    9948 command_runner.go:130] ! W0127 12:35:41.363467       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0127 12:36:54.330006    9948 command_runner.go:130] ! W0127 12:35:41.363598       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 12:36:54.330006    9948 command_runner.go:130] ! I0127 12:35:41.458309       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 12:36:54.330105    9948 command_runner.go:130] ! I0127 12:35:41.458594       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:54.330132    9948 command_runner.go:130] ! I0127 12:35:41.465036       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 12:36:54.330132    9948 command_runner.go:130] ! I0127 12:35:41.465587       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 12:36:54.330132    9948 command_runner.go:130] ! I0127 12:35:41.466480       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:36:54.330132    9948 command_runner.go:130] ! I0127 12:35:41.466554       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:54.330132    9948 command_runner.go:130] ! I0127 12:35:41.567642       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:36:54.332685    9948 logs.go:123] Gathering logs for kube-scheduler [a16e06a03860] ...
	I0127 12:36:54.332685    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a16e06a03860"
	I0127 12:36:54.365785    9948 command_runner.go:130] ! I0127 12:11:54.280431       1 serving.go:386] Generated self-signed cert in-memory
	I0127 12:36:54.365869    9948 command_runner.go:130] ! W0127 12:11:55.581187       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0127 12:36:54.365869    9948 command_runner.go:130] ! W0127 12:11:55.581309       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0127 12:36:54.365997    9948 command_runner.go:130] ! W0127 12:11:55.581382       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0127 12:36:54.365997    9948 command_runner.go:130] ! W0127 12:11:55.581390       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 12:36:54.365997    9948 command_runner.go:130] ! I0127 12:11:55.694969       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 12:36:54.365997    9948 command_runner.go:130] ! I0127 12:11:55.695193       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:54.365997    9948 command_runner.go:130] ! I0127 12:11:55.700077       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 12:36:54.365997    9948 command_runner.go:130] ! I0127 12:11:55.700446       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 12:36:54.365997    9948 command_runner.go:130] ! I0127 12:11:55.700992       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:36:54.365997    9948 command_runner.go:130] ! I0127 12:11:55.701410       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:54.365997    9948 command_runner.go:130] ! W0127 12:11:55.715521       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:54.365997    9948 command_runner.go:130] ! E0127 12:11:55.717196       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.365997    9948 command_runner.go:130] ! W0127 12:11:55.717649       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0127 12:36:54.365997    9948 command_runner.go:130] ! E0127 12:11:55.717921       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.365997    9948 command_runner.go:130] ! W0127 12:11:55.718583       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0127 12:36:54.365997    9948 command_runner.go:130] ! E0127 12:11:55.718820       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.365997    9948 command_runner.go:130] ! W0127 12:11:55.728298       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0127 12:36:54.365997    9948 command_runner.go:130] ! E0127 12:11:55.728648       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0127 12:36:54.365997    9948 command_runner.go:130] ! W0127 12:11:55.729000       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0127 12:36:54.365997    9948 command_runner.go:130] ! E0127 12:11:55.729243       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.365997    9948 command_runner.go:130] ! W0127 12:11:55.729633       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0127 12:36:54.365997    9948 command_runner.go:130] ! E0127 12:11:55.730380       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.365997    9948 command_runner.go:130] ! W0127 12:11:55.729677       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0127 12:36:54.366530    9948 command_runner.go:130] ! E0127 12:11:55.730837       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.366585    9948 command_runner.go:130] ! W0127 12:11:55.729713       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0127 12:36:54.366585    9948 command_runner.go:130] ! W0127 12:11:55.729749       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:54.366585    9948 command_runner.go:130] ! E0127 12:11:55.731479       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.366696    9948 command_runner.go:130] ! W0127 12:11:55.729782       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:54.366723    9948 command_runner.go:130] ! E0127 12:11:55.732242       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.366723    9948 command_runner.go:130] ! W0127 12:11:55.729811       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:54.366723    9948 command_runner.go:130] ! E0127 12:11:55.734240       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.366723    9948 command_runner.go:130] ! E0127 12:11:55.734704       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.366723    9948 command_runner.go:130] ! W0127 12:11:55.738077       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0127 12:36:54.366723    9948 command_runner.go:130] ! E0127 12:11:55.738873       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.366723    9948 command_runner.go:130] ! W0127 12:11:55.739202       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0127 12:36:54.366723    9948 command_runner.go:130] ! E0127 12:11:55.739366       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.366723    9948 command_runner.go:130] ! W0127 12:11:55.739719       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0127 12:36:54.366723    9948 command_runner.go:130] ! E0127 12:11:55.739865       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.366723    9948 command_runner.go:130] ! W0127 12:11:55.740221       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0127 12:36:54.366723    9948 command_runner.go:130] ! E0127 12:11:55.740378       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.366723    9948 command_runner.go:130] ! W0127 12:11:55.740608       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:54.366723    9948 command_runner.go:130] ! E0127 12:11:55.740761       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.366723    9948 command_runner.go:130] ! W0127 12:11:56.556598       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0127 12:36:54.366723    9948 command_runner.go:130] ! E0127 12:11:56.557622       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.367252    9948 command_runner.go:130] ! W0127 12:11:56.595830       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:54.367367    9948 command_runner.go:130] ! E0127 12:11:56.596047       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.367367    9948 command_runner.go:130] ! W0127 12:11:56.691826       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0127 12:36:54.367452    9948 command_runner.go:130] ! E0127 12:11:56.691909       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0127 12:36:54.367545    9948 command_runner.go:130] ! W0127 12:11:56.806048       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:54.367545    9948 command_runner.go:130] ! E0127 12:11:56.806109       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.367608    9948 command_runner.go:130] ! W0127 12:11:56.846817       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0127 12:36:54.367631    9948 command_runner.go:130] ! E0127 12:11:56.847194       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.367690    9948 command_runner.go:130] ! W0127 12:11:56.871314       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0127 12:36:54.367725    9948 command_runner.go:130] ! E0127 12:11:56.872178       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.367725    9948 command_runner.go:130] ! W0127 12:11:56.887386       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0127 12:36:54.367797    9948 command_runner.go:130] ! E0127 12:11:56.887549       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.367797    9948 command_runner.go:130] ! W0127 12:11:56.918642       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0127 12:36:54.367897    9948 command_runner.go:130] ! E0127 12:11:56.919135       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.367897    9948 command_runner.go:130] ! W0127 12:11:57.039216       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:54.367897    9948 command_runner.go:130] ! E0127 12:11:57.039707       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.367897    9948 command_runner.go:130] ! W0127 12:11:57.055169       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0127 12:36:54.367897    9948 command_runner.go:130] ! E0127 12:11:57.055233       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.367897    9948 command_runner.go:130] ! W0127 12:11:57.106656       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0127 12:36:54.367897    9948 command_runner.go:130] ! E0127 12:11:57.106828       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.367897    9948 command_runner.go:130] ! W0127 12:11:57.214186       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:54.367897    9948 command_runner.go:130] ! E0127 12:11:57.214290       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.367897    9948 command_runner.go:130] ! W0127 12:11:57.298150       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0127 12:36:54.367897    9948 command_runner.go:130] ! E0127 12:11:57.298337       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.367897    9948 command_runner.go:130] ! W0127 12:11:57.310098       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0127 12:36:54.367897    9948 command_runner.go:130] ! E0127 12:11:57.310312       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.367897    9948 command_runner.go:130] ! W0127 12:11:57.312117       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:54.367897    9948 command_runner.go:130] ! E0127 12:11:57.312192       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.368419    9948 command_runner.go:130] ! W0127 12:11:57.321525       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0127 12:36:54.368460    9948 command_runner.go:130] ! E0127 12:11:57.321832       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:54.368460    9948 command_runner.go:130] ! I0127 12:11:59.701790       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:36:54.368513    9948 command_runner.go:130] ! I0127 12:33:15.443053       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0127 12:36:54.368513    9948 command_runner.go:130] ! I0127 12:33:15.443143       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0127 12:36:54.368513    9948 command_runner.go:130] ! I0127 12:33:15.452458       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 12:36:54.368513    9948 command_runner.go:130] ! E0127 12:33:15.487412       1 run.go:72] "command failed" err="finished without leader elect"
	I0127 12:36:54.379298    9948 logs.go:123] Gathering logs for kube-controller-manager [e07a66f8f619] ...
	I0127 12:36:54.379298    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e07a66f8f619"
	I0127 12:36:54.405309    9948 command_runner.go:130] ! I0127 12:11:53.668834       1 serving.go:386] Generated self-signed cert in-memory
	I0127 12:36:54.405309    9948 command_runner.go:130] ! I0127 12:11:53.986868       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0127 12:36:54.405309    9948 command_runner.go:130] ! I0127 12:11:53.987309       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:54.405309    9948 command_runner.go:130] ! I0127 12:11:53.989401       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0127 12:36:54.405309    9948 command_runner.go:130] ! I0127 12:11:53.990012       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:54.405309    9948 command_runner.go:130] ! I0127 12:11:53.990187       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:53.990322       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.581695       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.581741       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.615284       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.615497       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.615545       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.626456       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.626896       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.626952       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.636784       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.636866       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.637077       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.637108       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.649619       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.649750       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.649765       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.650223       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.650457       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.682646       1 shared_informer.go:320] Caches are synced for tokens
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.684061       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.684098       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.698781       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.699001       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.699050       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.699060       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.720187       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.720450       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.725202       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.736652       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.737667       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.738017       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.758863       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0127 12:36:54.406334    9948 command_runner.go:130] ! I0127 12:11:58.759137       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:58.759589       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:58.759751       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:58.778737       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:58.779301       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:58.794263       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:58.805098       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:58.805155       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:58.805917       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:58.889766       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:58.889864       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:58.889880       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.169736       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.169792       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.169804       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.292507       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.292665       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.292680       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.451231       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.451328       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.451387       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.451649       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.594702       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.594829       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.595498       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.595889       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.744969       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.745617       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.745871       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.892444       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.892907       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:11:59.893093       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.136328       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.136634       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.136654       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.136681       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.425858       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.426027       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.426047       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.426160       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.426327       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.426356       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.685414       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.685471       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.685482       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.841490       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.841888       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.841953       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.888027       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.888135       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.888174       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.889767       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.889893       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.889957       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.890020       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.890047       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.890072       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.890079       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.890101       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.890256       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:54.407315    9948 command_runner.go:130] ! I0127 12:12:00.890391       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.042988       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.043513       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.043602       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.043761       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0127 12:36:54.408329    9948 command_runner.go:130] ! W0127 12:12:01.189051       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.192613       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.192663       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.193062       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.193147       1 shared_informer.go:313] Waiting for caches to sync for node
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.493812       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.493885       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.493919       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494208       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494371       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494391       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494413       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494456       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494473       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494487       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494531       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494547       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494617       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494687       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494717       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494749       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494763       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494781       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494815       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.494890       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.495196       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.495268       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.495404       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.495519       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.640900       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.641423       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.641492       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.789671       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.790209       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.790224       1 shared_informer.go:313] Waiting for caches to sync for job
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.939873       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.940295       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:01.940375       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:02.099155       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:02.099654       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:02.099741       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:02.240427       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:02.240688       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:02.240725       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0127 12:36:54.408329    9948 command_runner.go:130] ! I0127 12:12:02.390343       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.390438       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.390450       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.539643       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.539766       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.539778       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.691835       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.691969       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.739108       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.739143       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.739157       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.739400       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.739775       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.740069       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.890126       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.890235       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:02.890247       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.040125       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.040770       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.040983       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.063768       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.092877       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.093448       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.110720       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000\" does not exist"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.126986       1 shared_informer.go:320] Caches are synced for endpoint
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.127087       1 shared_informer.go:320] Caches are synced for taint
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.127203       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.127313       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.127524       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.137503       1 shared_informer.go:320] Caches are synced for service account
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.137554       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.138208       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.138217       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.138352       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.141127       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.141405       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.141415       1 shared_informer.go:320] Caches are synced for TTL
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.141424       1 shared_informer.go:320] Caches are synced for ephemeral
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.141607       1 shared_informer.go:320] Caches are synced for daemon sets
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.141617       1 shared_informer.go:320] Caches are synced for stateful set
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.142442       1 shared_informer.go:320] Caches are synced for cronjob
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.146511       1 shared_informer.go:320] Caches are synced for persistent volume
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.150765       1 shared_informer.go:320] Caches are synced for expand
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.152122       1 shared_informer.go:320] Caches are synced for PVC protection
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.160180       1 shared_informer.go:320] Caches are synced for GC
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.164570       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.170520       1 shared_informer.go:320] Caches are synced for namespace
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.185040       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.186131       1 shared_informer.go:320] Caches are synced for HPA
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.188683       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0127 12:36:54.409313    9948 command_runner.go:130] ! I0127 12:12:03.191196       1 shared_informer.go:320] Caches are synced for crt configmap
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.192089       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.192497       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.192682       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.192862       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.193013       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.193030       1 shared_informer.go:320] Caches are synced for job
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.193151       1 shared_informer.go:320] Caches are synced for deployment
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.193982       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.194157       1 shared_informer.go:320] Caches are synced for node
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.194244       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.194281       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.194310       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.194318       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.194846       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.196614       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.197111       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.197095       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.199168       1 shared_informer.go:320] Caches are synced for disruption
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.200153       1 shared_informer.go:320] Caches are synced for attach detach
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.207229       1 shared_informer.go:320] Caches are synced for PV protection
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.214016       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-659000" podCIDRs=["10.244.0.0/24"]
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.214057       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.214083       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.216325       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:03.840748       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:04.356274       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="345.711056ms"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:04.454747       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="97.841105ms"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:04.534437       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="79.56576ms"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:04.576528       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="41.959673ms"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:04.576771       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="53.3µs"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:26.045035       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:26.074083       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:26.085407       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="109.3µs"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:26.129584       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="119.3µs"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:27.964629       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="49.302µs"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:28.020606       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="31.923176ms"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:28.020971       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="110.703µs"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:28.132341       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:12:29.790464       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:07.611410       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m02\" does not exist"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:07.630009       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-659000-m02" podCIDRs=["10.244.1.0/24"]
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:07.631297       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:07.631526       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:07.655401       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:07.883346       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:08.169505       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m02"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:08.255644       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:08.418223       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:17.811768       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:36.752543       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:36.753915       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:36.769807       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:38.199464       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:15:38.449749       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:16:02.550786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="103.313802ms"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:16:02.585867       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="34.67067ms"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:16:02.586257       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="347.903µs"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:16:02.588870       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="48.6µs"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:16:05.434486       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="13.589639ms"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:16:05.435765       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="54.401µs"
	I0127 12:36:54.410312    9948 command_runner.go:130] ! I0127 12:16:05.890170       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="9.003392ms"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:16:05.890477       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="36.901µs"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:16:09.305780       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:16:33.434322       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:19:26.820887       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:19:54.916460       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:19:54.917420       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m03\" does not exist"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:19:54.965530       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-659000-m03" podCIDRs=["10.244.2.0/24"]
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:19:54.966061       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:19:54.966297       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:19:55.802981       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:19:56.378698       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:19:58.252320       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:19:58.280410       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:20:05.560777       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:20:25.959831       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:20:28.750598       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:20:28.751325       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:20:28.769163       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:20:33.279397       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:23:26.795899       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:24:32.956118       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:25:42.001288       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:28:32.628178       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:28:38.397672       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:28:38.399092       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:28:38.428451       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:28:43.510900       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:29:38.000555       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:30:52.866288       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:30:52.895359       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:30:58.140304       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:04.208510       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:04.209007       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m03\" does not exist"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:04.238560       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-659000-m03" podCIDRs=["10.244.3.0/24"]
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:04.238634       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! E0127 12:31:04.255963       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-659000-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-659000-m03" podCIDRs=["10.244.4.0/24"]
	I0127 12:36:54.411307    9948 command_runner.go:130] ! E0127 12:31:04.256068       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-659000-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! E0127 12:31:04.256109       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'multinode-659000-m03': failed to patch node CIDR: Node \"multinode-659000-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:04.256134       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:04.261242       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:04.513319       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:05.081710       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:08.523576       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:14.394811       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:22.407069       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:22.407472       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:22.419743       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:31:23.498434       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:33:08.544063       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:33:08.544656       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.411307    9948 command_runner.go:130] ! I0127 12:33:08.574301       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.412303    9948 command_runner.go:130] ! I0127 12:33:13.661256       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.431320    9948 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:36:54.431320    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 12:36:54.599259    9948 command_runner.go:130] > Name:               multinode-659000
	I0127 12:36:54.599259    9948 command_runner.go:130] > Roles:              control-plane
	I0127 12:36:54.599259    9948 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0127 12:36:54.599259    9948 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0127 12:36:54.599259    9948 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0127 12:36:54.599259    9948 command_runner.go:130] >                     kubernetes.io/hostname=multinode-659000
	I0127 12:36:54.599259    9948 command_runner.go:130] >                     kubernetes.io/os=linux
	I0127 12:36:54.599259    9948 command_runner.go:130] >                     minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	I0127 12:36:54.599259    9948 command_runner.go:130] >                     minikube.k8s.io/name=multinode-659000
	I0127 12:36:54.599259    9948 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0127 12:36:54.599259    9948 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_01_27T12_12_00_0700
	I0127 12:36:54.599259    9948 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0127 12:36:54.599259    9948 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0127 12:36:54.599259    9948 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0127 12:36:54.599259    9948 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0127 12:36:54.599259    9948 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0127 12:36:54.599259    9948 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0127 12:36:54.599259    9948 command_runner.go:130] > CreationTimestamp:  Mon, 27 Jan 2025 12:11:55 +0000
	I0127 12:36:54.599259    9948 command_runner.go:130] > Taints:             <none>
	I0127 12:36:54.599259    9948 command_runner.go:130] > Unschedulable:      false
	I0127 12:36:54.599259    9948 command_runner.go:130] > Lease:
	I0127 12:36:54.599259    9948 command_runner.go:130] >   HolderIdentity:  multinode-659000
	I0127 12:36:54.599259    9948 command_runner.go:130] >   AcquireTime:     <unset>
	I0127 12:36:54.599259    9948 command_runner.go:130] >   RenewTime:       Mon, 27 Jan 2025 12:36:52 +0000
	I0127 12:36:54.599259    9948 command_runner.go:130] > Conditions:
	I0127 12:36:54.599259    9948 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0127 12:36:54.599259    9948 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0127 12:36:54.599259    9948 command_runner.go:130] >   MemoryPressure   False   Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:11:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0127 12:36:54.599259    9948 command_runner.go:130] >   DiskPressure     False   Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:11:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0127 12:36:54.599259    9948 command_runner.go:130] >   PIDPressure      False   Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:11:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Ready            True    Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:36:32 +0000   KubeletReady                 kubelet is posting ready status
	I0127 12:36:54.600252    9948 command_runner.go:130] > Addresses:
	I0127 12:36:54.600252    9948 command_runner.go:130] >   InternalIP:  172.29.198.106
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Hostname:    multinode-659000
	I0127 12:36:54.600252    9948 command_runner.go:130] > Capacity:
	I0127 12:36:54.600252    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:54.600252    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:54.600252    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:54.600252    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:54.600252    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:54.600252    9948 command_runner.go:130] > Allocatable:
	I0127 12:36:54.600252    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:54.600252    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:54.600252    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:54.600252    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:54.600252    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:54.600252    9948 command_runner.go:130] > System Info:
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Machine ID:                 312902fc96b948148d51eecf097c4a9d
	I0127 12:36:54.600252    9948 command_runner.go:130] >   System UUID:                be6234aa-9e29-bb41-8165-59b265a4d7d0
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Boot ID:                    058425a5-0652-4c5c-a517-2369b8cac13d
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Kernel Version:             5.10.207
	I0127 12:36:54.600252    9948 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Operating System:           linux
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Architecture:               amd64
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0127 12:36:54.600252    9948 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0127 12:36:54.600252    9948 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0127 12:36:54.600252    9948 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0127 12:36:54.600252    9948 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0127 12:36:54.600252    9948 command_runner.go:130] >   default                     busybox-58667487b6-2jq9j                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0127 12:36:54.600252    9948 command_runner.go:130] >   kube-system                 coredns-668d6bf9bc-2qw6w                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0127 12:36:54.600252    9948 command_runner.go:130] >   kube-system                 etcd-multinode-659000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         72s
	I0127 12:36:54.600252    9948 command_runner.go:130] >   kube-system                 kindnet-z2hqq                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0127 12:36:54.600252    9948 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-659000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	I0127 12:36:54.600252    9948 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-659000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0127 12:36:54.600252    9948 command_runner.go:130] >   kube-system                 kube-proxy-s46mv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0127 12:36:54.600252    9948 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-659000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	I0127 12:36:54.600252    9948 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0127 12:36:54.600252    9948 command_runner.go:130] > Allocated resources:
	I0127 12:36:54.600252    9948 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Resource           Requests     Limits
	I0127 12:36:54.600252    9948 command_runner.go:130] >   --------           --------     ------
	I0127 12:36:54.600252    9948 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0127 12:36:54.600252    9948 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0127 12:36:54.600252    9948 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0127 12:36:54.600252    9948 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0127 12:36:54.600252    9948 command_runner.go:130] > Events:
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Type     Reason                   Age                From             Message
	I0127 12:36:54.600252    9948 command_runner.go:130] >   ----     ------                   ----               ----             -------
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   Starting                 24m                kube-proxy       
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   Starting                 69s                kube-proxy       
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   Starting                 25m                kubelet          Starting kubelet.
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   NodeHasSufficientMemory  25m (x8 over 25m)  kubelet          Node multinode-659000 status is now: NodeHasSufficientMemory
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    25m (x8 over 25m)  kubelet          Node multinode-659000 status is now: NodeHasNoDiskPressure
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   NodeHasSufficientPID     25m (x7 over 25m)  kubelet          Node multinode-659000 status is now: NodeHasSufficientPID
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    24m                kubelet          Node multinode-659000 status is now: NodeHasNoDiskPressure
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   NodeHasSufficientMemory  24m                kubelet          Node multinode-659000 status is now: NodeHasSufficientMemory
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   NodeHasSufficientPID     24m                kubelet          Node multinode-659000 status is now: NodeHasSufficientPID
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   Starting                 24m                kubelet          Starting kubelet.
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   RegisteredNode           24m                node-controller  Node multinode-659000 event: Registered Node multinode-659000 in Controller
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   NodeReady                24m                kubelet          Node multinode-659000 status is now: NodeReady
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   Starting                 78s                kubelet          Starting kubelet.
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   NodeHasSufficientMemory  78s (x8 over 78s)  kubelet          Node multinode-659000 status is now: NodeHasSufficientMemory
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    78s (x8 over 78s)  kubelet          Node multinode-659000 status is now: NodeHasNoDiskPressure
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   NodeHasSufficientPID     78s (x7 over 78s)  kubelet          Node multinode-659000 status is now: NodeHasSufficientPID
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Warning  Rebooted                 73s                kubelet          Node multinode-659000 has been rebooted, boot id: 058425a5-0652-4c5c-a517-2369b8cac13d
	I0127 12:36:54.600252    9948 command_runner.go:130] >   Normal   RegisteredNode           70s                node-controller  Node multinode-659000 event: Registered Node multinode-659000 in Controller
	I0127 12:36:54.600252    9948 command_runner.go:130] > Name:               multinode-659000-m02
	I0127 12:36:54.600252    9948 command_runner.go:130] > Roles:              <none>
	I0127 12:36:54.600252    9948 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0127 12:36:54.600252    9948 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0127 12:36:54.600252    9948 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     kubernetes.io/hostname=multinode-659000-m02
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     kubernetes.io/os=linux
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     minikube.k8s.io/name=multinode-659000
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_01_27T12_15_08_0700
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0127 12:36:54.601251    9948 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0127 12:36:54.601251    9948 command_runner.go:130] > CreationTimestamp:  Mon, 27 Jan 2025 12:15:07 +0000
	I0127 12:36:54.601251    9948 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0127 12:36:54.601251    9948 command_runner.go:130] > Unschedulable:      false
	I0127 12:36:54.601251    9948 command_runner.go:130] > Lease:
	I0127 12:36:54.601251    9948 command_runner.go:130] >   HolderIdentity:  multinode-659000-m02
	I0127 12:36:54.601251    9948 command_runner.go:130] >   AcquireTime:     <unset>
	I0127 12:36:54.601251    9948 command_runner.go:130] >   RenewTime:       Mon, 27 Jan 2025 12:32:39 +0000
	I0127 12:36:54.601251    9948 command_runner.go:130] > Conditions:
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0127 12:36:54.601251    9948 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0127 12:36:54.601251    9948 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:54.601251    9948 command_runner.go:130] >   DiskPressure     Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:54.601251    9948 command_runner.go:130] >   PIDPressure      Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Ready            Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:54.601251    9948 command_runner.go:130] > Addresses:
	I0127 12:36:54.601251    9948 command_runner.go:130] >   InternalIP:  172.29.199.129
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Hostname:    multinode-659000-m02
	I0127 12:36:54.601251    9948 command_runner.go:130] > Capacity:
	I0127 12:36:54.601251    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:54.601251    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:54.601251    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:54.601251    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:54.601251    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:54.601251    9948 command_runner.go:130] > Allocatable:
	I0127 12:36:54.601251    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:54.601251    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:54.601251    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:54.601251    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:54.601251    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:54.601251    9948 command_runner.go:130] > System Info:
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Machine ID:                 30ce15ff72904b54b07c49f3e2f28802
	I0127 12:36:54.601251    9948 command_runner.go:130] >   System UUID:                b6923799-fa1e-b54c-9340-50dd6a2378f5
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Boot ID:                    3308d183-ec79-4aeb-9d90-80d47cdbff63
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Kernel Version:             5.10.207
	I0127 12:36:54.601251    9948 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Operating System:           linux
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Architecture:               amd64
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0127 12:36:54.601251    9948 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0127 12:36:54.601251    9948 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0127 12:36:54.601251    9948 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0127 12:36:54.601251    9948 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0127 12:36:54.601251    9948 command_runner.go:130] >   default                     busybox-58667487b6-ktfxc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0127 12:36:54.601251    9948 command_runner.go:130] >   kube-system                 kindnet-n7vjl               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0127 12:36:54.601251    9948 command_runner.go:130] >   kube-system                 kube-proxy-pjhc8            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0127 12:36:54.601251    9948 command_runner.go:130] > Allocated resources:
	I0127 12:36:54.601251    9948 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Resource           Requests   Limits
	I0127 12:36:54.601251    9948 command_runner.go:130] >   --------           --------   ------
	I0127 12:36:54.601251    9948 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0127 12:36:54.601251    9948 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0127 12:36:54.601251    9948 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0127 12:36:54.601251    9948 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0127 12:36:54.601251    9948 command_runner.go:130] > Events:
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0127 12:36:54.601251    9948 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-659000-m02 status is now: NodeHasSufficientMemory
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-659000-m02 status is now: NodeHasNoDiskPressure
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-659000-m02 status is now: NodeHasSufficientPID
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-659000-m02 event: Registered Node multinode-659000-m02 in Controller
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-659000-m02 status is now: NodeReady
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Normal  RegisteredNode           70s                node-controller  Node multinode-659000-m02 event: Registered Node multinode-659000-m02 in Controller
	I0127 12:36:54.601251    9948 command_runner.go:130] >   Normal  NodeNotReady             20s                node-controller  Node multinode-659000-m02 status is now: NodeNotReady
	I0127 12:36:54.601251    9948 command_runner.go:130] > Name:               multinode-659000-m03
	I0127 12:36:54.601251    9948 command_runner.go:130] > Roles:              <none>
	I0127 12:36:54.601251    9948 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     kubernetes.io/hostname=multinode-659000-m03
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     kubernetes.io/os=linux
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     minikube.k8s.io/name=multinode-659000
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_01_27T12_31_04_0700
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0127 12:36:54.601251    9948 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0127 12:36:54.601251    9948 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0127 12:36:54.602267    9948 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0127 12:36:54.602267    9948 command_runner.go:130] > CreationTimestamp:  Mon, 27 Jan 2025 12:31:04 +0000
	I0127 12:36:54.602267    9948 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0127 12:36:54.602267    9948 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0127 12:36:54.602267    9948 command_runner.go:130] > Unschedulable:      false
	I0127 12:36:54.602267    9948 command_runner.go:130] > Lease:
	I0127 12:36:54.602267    9948 command_runner.go:130] >   HolderIdentity:  multinode-659000-m03
	I0127 12:36:54.602267    9948 command_runner.go:130] >   AcquireTime:     <unset>
	I0127 12:36:54.602267    9948 command_runner.go:130] >   RenewTime:       Mon, 27 Jan 2025 12:32:15 +0000
	I0127 12:36:54.602267    9948 command_runner.go:130] > Conditions:
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0127 12:36:54.602267    9948 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0127 12:36:54.602267    9948 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:54.602267    9948 command_runner.go:130] >   DiskPressure     Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:54.602267    9948 command_runner.go:130] >   PIDPressure      Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Ready            Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:54.602267    9948 command_runner.go:130] > Addresses:
	I0127 12:36:54.602267    9948 command_runner.go:130] >   InternalIP:  172.29.206.88
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Hostname:    multinode-659000-m03
	I0127 12:36:54.602267    9948 command_runner.go:130] > Capacity:
	I0127 12:36:54.602267    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:54.602267    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:54.602267    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:54.602267    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:54.602267    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:54.602267    9948 command_runner.go:130] > Allocatable:
	I0127 12:36:54.602267    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:54.602267    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:54.602267    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:54.602267    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:54.602267    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:54.602267    9948 command_runner.go:130] > System Info:
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Machine ID:                 5cd7b7bdbad940e0831e949f70fdd5af
	I0127 12:36:54.602267    9948 command_runner.go:130] >   System UUID:                bab0a90b-9ed8-ba42-88b9-fc6568ad7a53
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Boot ID:                    9d0d04c8-71ef-487a-a13c-e1de6463b3fe
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Kernel Version:             5.10.207
	I0127 12:36:54.602267    9948 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Operating System:           linux
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Architecture:               amd64
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0127 12:36:54.602267    9948 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0127 12:36:54.602267    9948 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0127 12:36:54.602267    9948 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0127 12:36:54.602267    9948 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0127 12:36:54.602267    9948 command_runner.go:130] >   kube-system                 kindnet-kpfjt       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	I0127 12:36:54.602267    9948 command_runner.go:130] >   kube-system                 kube-proxy-sk5js    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	I0127 12:36:54.602267    9948 command_runner.go:130] > Allocated resources:
	I0127 12:36:54.602267    9948 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Resource           Requests   Limits
	I0127 12:36:54.602267    9948 command_runner.go:130] >   --------           --------   ------
	I0127 12:36:54.602267    9948 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0127 12:36:54.602267    9948 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0127 12:36:54.602267    9948 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0127 12:36:54.602267    9948 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0127 12:36:54.602267    9948 command_runner.go:130] > Events:
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0127 12:36:54.602267    9948 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Normal  Starting                 5m46s                  kube-proxy       
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Normal  NodeHasSufficientMemory  17m (x2 over 17m)      kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientMemory
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Normal  NodeHasSufficientPID     17m (x2 over 17m)      kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientPID
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Normal  NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    17m (x2 over 17m)      kubelet          Node multinode-659000-m03 status is now: NodeHasNoDiskPressure
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-659000-m03 status is now: NodeReady
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Normal  Starting                 5m51s                  kubelet          Starting kubelet.
	I0127 12:36:54.602267    9948 command_runner.go:130] >   Normal  CIDRAssignmentFailed     5m50s                  cidrAllocator    Node multinode-659000-m03 status is now: CIDRAssignmentFailed
	I0127 12:36:54.603266    9948 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m50s (x2 over 5m50s)  kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientMemory
	I0127 12:36:54.603266    9948 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m50s (x2 over 5m50s)  kubelet          Node multinode-659000-m03 status is now: NodeHasNoDiskPressure
	I0127 12:36:54.603266    9948 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m50s (x2 over 5m50s)  kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientPID
	I0127 12:36:54.603266    9948 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m50s                  kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:54.603266    9948 command_runner.go:130] >   Normal  RegisteredNode           5m46s                  node-controller  Node multinode-659000-m03 event: Registered Node multinode-659000-m03 in Controller
	I0127 12:36:54.603266    9948 command_runner.go:130] >   Normal  NodeReady                5m32s                  kubelet          Node multinode-659000-m03 status is now: NodeReady
	I0127 12:36:54.603266    9948 command_runner.go:130] >   Normal  NodeNotReady             3m46s                  node-controller  Node multinode-659000-m03 status is now: NodeNotReady
	I0127 12:36:54.603266    9948 command_runner.go:130] >   Normal  RegisteredNode           70s                    node-controller  Node multinode-659000-m03 event: Registered Node multinode-659000-m03 in Controller
	I0127 12:36:54.612412    9948 logs.go:123] Gathering logs for kube-proxy [bbec7ccef7da] ...
	I0127 12:36:54.612412    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbec7ccef7da"
	I0127 12:36:54.652262    9948 command_runner.go:130] ! I0127 12:12:05.290111       1 server_linux.go:66] "Using iptables proxy"
	I0127 12:36:54.653105    9948 command_runner.go:130] ! E0127 12:12:05.321300       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0127 12:36:54.653105    9948 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0127 12:36:54.653179    9948 command_runner.go:130] ! 	add table ip kube-proxy
	I0127 12:36:54.653179    9948 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:54.653179    9948 command_runner.go:130] !  >
	I0127 12:36:54.653179    9948 command_runner.go:130] ! E0127 12:12:05.352123       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0127 12:36:54.653179    9948 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0127 12:36:54.653179    9948 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0127 12:36:54.653179    9948 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:54.653260    9948 command_runner.go:130] !  >
	I0127 12:36:54.653260    9948 command_runner.go:130] ! I0127 12:12:05.378799       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.204.17"]
	I0127 12:36:54.653310    9948 command_runner.go:130] ! E0127 12:12:05.378872       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 12:36:54.653310    9948 command_runner.go:130] ! I0127 12:12:05.470419       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 12:36:54.653310    9948 command_runner.go:130] ! I0127 12:12:05.470552       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 12:36:54.653373    9948 command_runner.go:130] ! I0127 12:12:05.470596       1 server_linux.go:170] "Using iptables Proxier"
	I0127 12:36:54.653398    9948 command_runner.go:130] ! I0127 12:12:05.475557       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 12:36:54.653428    9948 command_runner.go:130] ! I0127 12:12:05.476697       1 server.go:497] "Version info" version="v1.32.1"
	I0127 12:36:54.653428    9948 command_runner.go:130] ! I0127 12:12:05.476717       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:54.653428    9948 command_runner.go:130] ! I0127 12:12:05.478788       1 config.go:199] "Starting service config controller"
	I0127 12:36:54.653428    9948 command_runner.go:130] ! I0127 12:12:05.478844       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 12:36:54.653428    9948 command_runner.go:130] ! I0127 12:12:05.478916       1 config.go:105] "Starting endpoint slice config controller"
	I0127 12:36:54.653428    9948 command_runner.go:130] ! I0127 12:12:05.479018       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 12:36:54.653428    9948 command_runner.go:130] ! I0127 12:12:05.480053       1 config.go:329] "Starting node config controller"
	I0127 12:36:54.653428    9948 command_runner.go:130] ! I0127 12:12:05.480113       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 12:36:54.653428    9948 command_runner.go:130] ! I0127 12:12:05.579605       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 12:36:54.653428    9948 command_runner.go:130] ! I0127 12:12:05.579669       1 shared_informer.go:320] Caches are synced for service config
	I0127 12:36:54.653428    9948 command_runner.go:130] ! I0127 12:12:05.580463       1 shared_informer.go:320] Caches are synced for node config
	I0127 12:36:54.656163    9948 logs.go:123] Gathering logs for kube-controller-manager [8d4872cda28d] ...
	I0127 12:36:54.656219    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4872cda28d"
	I0127 12:36:54.685743    9948 command_runner.go:130] ! I0127 12:35:39.384985       1 serving.go:386] Generated self-signed cert in-memory
	I0127 12:36:54.685795    9948 command_runner.go:130] ! I0127 12:35:39.805936       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0127 12:36:54.685795    9948 command_runner.go:130] ! I0127 12:35:39.811206       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:54.685795    9948 command_runner.go:130] ! I0127 12:35:39.817632       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0127 12:36:54.685795    9948 command_runner.go:130] ! I0127 12:35:39.822579       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:54.685893    9948 command_runner.go:130] ! I0127 12:35:39.822772       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 12:36:54.685893    9948 command_runner.go:130] ! I0127 12:35:39.823033       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:54.685893    9948 command_runner.go:130] ! I0127 12:35:43.406116       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0127 12:36:54.685893    9948 command_runner.go:130] ! I0127 12:35:43.407249       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0127 12:36:54.685972    9948 command_runner.go:130] ! I0127 12:35:43.417237       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0127 12:36:54.685972    9948 command_runner.go:130] ! I0127 12:35:43.417292       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0127 12:36:54.685972    9948 command_runner.go:130] ! I0127 12:35:43.417300       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0127 12:36:54.685972    9948 command_runner.go:130] ! I0127 12:35:43.417307       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0127 12:36:54.685972    9948 command_runner.go:130] ! I0127 12:35:43.417506       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0127 12:36:54.685972    9948 command_runner.go:130] ! I0127 12:35:43.417534       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0127 12:36:54.685972    9948 command_runner.go:130] ! I0127 12:35:43.417553       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0127 12:36:54.686068    9948 command_runner.go:130] ! I0127 12:35:43.431621       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0127 12:36:54.686096    9948 command_runner.go:130] ! I0127 12:35:43.431964       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0127 12:36:54.686096    9948 command_runner.go:130] ! I0127 12:35:43.431989       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0127 12:36:54.686096    9948 command_runner.go:130] ! I0127 12:35:43.432010       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0127 12:36:54.686096    9948 command_runner.go:130] ! I0127 12:35:43.442961       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0127 12:36:54.686096    9948 command_runner.go:130] ! I0127 12:35:43.447308       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0127 12:36:54.686174    9948 command_runner.go:130] ! I0127 12:35:43.447396       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0127 12:36:54.686174    9948 command_runner.go:130] ! I0127 12:35:43.449412       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0127 12:36:54.686174    9948 command_runner.go:130] ! I0127 12:35:43.449608       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0127 12:36:54.686234    9948 command_runner.go:130] ! I0127 12:35:43.466583       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0127 12:36:54.686258    9948 command_runner.go:130] ! I0127 12:35:43.467490       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0127 12:36:54.686258    9948 command_runner.go:130] ! I0127 12:35:43.467508       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0127 12:36:54.686258    9948 command_runner.go:130] ! I0127 12:35:43.491988       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0127 12:36:54.686307    9948 command_runner.go:130] ! I0127 12:35:43.493672       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0127 12:36:54.686329    9948 command_runner.go:130] ! I0127 12:35:43.493698       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.498557       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.503953       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.503976       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.505729       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.505861       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.505872       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.509718       1 shared_informer.go:320] Caches are synced for tokens
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.510192       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.510208       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.510698       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.510714       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.512896       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.513433       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.513448       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.516433       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.516659       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.516671       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.524334       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.524358       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.524545       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.524557       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.534871       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.535028       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.535038       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.557745       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.557975       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.612615       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.612890       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.612906       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.616333       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.627087       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.627107       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0127 12:36:54.686372    9948 command_runner.go:130] ! I0127 12:35:43.692864       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0127 12:36:54.686907    9948 command_runner.go:130] ! I0127 12:35:43.692892       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0127 12:36:54.686907    9948 command_runner.go:130] ! I0127 12:35:43.693095       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0127 12:36:54.686969    9948 command_runner.go:130] ! I0127 12:35:43.700796       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0127 12:36:54.686969    9948 command_runner.go:130] ! I0127 12:35:43.703832       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0127 12:36:54.687017    9948 command_runner.go:130] ! I0127 12:35:43.703867       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0127 12:36:54.687043    9948 command_runner.go:130] ! I0127 12:35:43.713912       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0127 12:36:54.687043    9948 command_runner.go:130] ! I0127 12:35:43.714114       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0127 12:36:54.687043    9948 command_runner.go:130] ! I0127 12:35:43.714094       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0127 12:36:54.687043    9948 command_runner.go:130] ! I0127 12:35:43.714712       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0127 12:36:54.687043    9948 command_runner.go:130] ! I0127 12:35:43.714721       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0127 12:36:54.687107    9948 command_runner.go:130] ! I0127 12:35:43.721904       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0127 12:36:54.687131    9948 command_runner.go:130] ! I0127 12:35:43.722372       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0127 12:36:54.687177    9948 command_runner.go:130] ! I0127 12:35:43.723076       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0127 12:36:54.687177    9948 command_runner.go:130] ! I0127 12:35:43.739709       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.739886       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.739897       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.748074       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.748419       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.748432       1 shared_informer.go:313] Waiting for caches to sync for job
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.774085       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.774108       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.774196       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.814844       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.815383       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.815410       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! W0127 12:35:43.815432       1 shared_informer.go:597] resyncPeriod 17h46m45.188948257s is smaller than resyncCheckPeriod 20h1m58.14772951s and the informer has already started. Changing it to 20h1m58.14772951s
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.815487       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.815503       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.816077       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.816613       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.817053       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.817252       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.817373       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.817397       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! W0127 12:35:43.818105       1 shared_informer.go:597] resyncPeriod 12h27m56.377400464s is smaller than resyncCheckPeriod 20h1m58.14772951s and the informer has already started. Changing it to 20h1m58.14772951s
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.818223       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.818270       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.818295       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.818319       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.818336       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.818363       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.818376       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.818392       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.818410       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0127 12:36:54.687233    9948 command_runner.go:130] ! I0127 12:35:43.818442       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0127 12:36:54.687781    9948 command_runner.go:130] ! I0127 12:35:43.818764       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0127 12:36:54.687781    9948 command_runner.go:130] ! I0127 12:35:43.818778       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0127 12:36:54.687781    9948 command_runner.go:130] ! I0127 12:35:43.819843       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0127 12:36:54.687781    9948 command_runner.go:130] ! I0127 12:35:43.841955       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0127 12:36:54.687861    9948 command_runner.go:130] ! I0127 12:35:43.842559       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0127 12:36:54.687861    9948 command_runner.go:130] ! I0127 12:35:43.842587       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0127 12:36:54.687861    9948 command_runner.go:130] ! I0127 12:35:43.842995       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0127 12:36:54.687916    9948 command_runner.go:130] ! I0127 12:35:43.852026       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0127 12:36:54.687943    9948 command_runner.go:130] ! I0127 12:35:43.852211       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0127 12:36:54.687943    9948 command_runner.go:130] ! I0127 12:35:43.852253       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0127 12:36:54.687943    9948 command_runner.go:130] ! I0127 12:35:43.922876       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0127 12:36:54.687943    9948 command_runner.go:130] ! I0127 12:35:43.923019       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0127 12:36:54.687943    9948 command_runner.go:130] ! I0127 12:35:43.923033       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0127 12:36:54.688025    9948 command_runner.go:130] ! I0127 12:35:43.962858       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0127 12:36:54.688025    9948 command_runner.go:130] ! I0127 12:35:43.962895       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0127 12:36:54.688106    9948 command_runner.go:130] ! I0127 12:35:43.963021       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0127 12:36:54.688180    9948 command_runner.go:130] ! I0127 12:35:43.963037       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0127 12:36:54.688202    9948 command_runner.go:130] ! I0127 12:35:44.014798       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0127 12:36:54.688202    9948 command_runner.go:130] ! I0127 12:35:44.016438       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0127 12:36:54.688202    9948 command_runner.go:130] ! I0127 12:35:44.016458       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0127 12:36:54.688202    9948 command_runner.go:130] ! I0127 12:35:44.066881       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0127 12:36:54.688255    9948 command_runner.go:130] ! I0127 12:35:44.067018       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0127 12:36:54.688255    9948 command_runner.go:130] ! I0127 12:35:44.067064       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0127 12:36:54.688303    9948 command_runner.go:130] ! W0127 12:35:44.227808       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.236233       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.236429       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.236541       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.236556       1 shared_informer.go:313] Waiting for caches to sync for node
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.261051       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.261341       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.261374       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.314220       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.314319       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.314352       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.364392       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.364625       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.365833       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.365937       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.365975       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.365977       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.367697       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.368067       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.368427       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.369763       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.370290       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.370408       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.370568       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.412258       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.412274       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.412282       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.412297       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.412368       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.412379       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0127 12:36:54.688303    9948 command_runner.go:130] ! I0127 12:35:44.517568       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0127 12:36:54.688822    9948 command_runner.go:130] ! I0127 12:35:44.517771       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0127 12:36:54.688822    9948 command_runner.go:130] ! I0127 12:35:44.518074       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0127 12:36:54.688865    9948 command_runner.go:130] ! I0127 12:35:44.518288       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0127 12:36:54.688865    9948 command_runner.go:130] ! I0127 12:35:44.564449       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0127 12:36:54.688865    9948 command_runner.go:130] ! I0127 12:35:44.564546       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0127 12:36:54.688865    9948 command_runner.go:130] ! I0127 12:35:44.564657       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0127 12:36:54.688865    9948 command_runner.go:130] ! I0127 12:35:44.591265       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0127 12:36:54.688865    9948 command_runner.go:130] ! I0127 12:35:44.663628       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0127 12:36:54.688963    9948 command_runner.go:130] ! I0127 12:35:44.727283       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0127 12:36:54.688963    9948 command_runner.go:130] ! I0127 12:35:44.739370       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000\" does not exist"
	I0127 12:36:54.689018    9948 command_runner.go:130] ! I0127 12:35:44.739797       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m02\" does not exist"
	I0127 12:36:54.689042    9948 command_runner.go:130] ! I0127 12:35:44.740184       1 shared_informer.go:320] Caches are synced for crt configmap
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.740835       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m03\" does not exist"
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.747985       1 shared_informer.go:320] Caches are synced for GC
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.748593       1 shared_informer.go:320] Caches are synced for job
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.765439       1 shared_informer.go:320] Caches are synced for cronjob
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.765669       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.765982       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.766264       1 shared_informer.go:320] Caches are synced for expand
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.766617       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.767305       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.767462       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.768217       1 shared_informer.go:320] Caches are synced for stateful set
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.766681       1 shared_informer.go:320] Caches are synced for daemon sets
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.774887       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.775167       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.775269       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.775418       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.778028       1 shared_informer.go:320] Caches are synced for HPA
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.793610       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.793916       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.798773       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.805302       1 shared_informer.go:320] Caches are synced for PVC protection
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.805404       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.806234       1 shared_informer.go:320] Caches are synced for PV protection
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.811621       1 shared_informer.go:320] Caches are synced for ephemeral
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.813099       1 shared_informer.go:320] Caches are synced for TTL
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.813420       1 shared_informer.go:320] Caches are synced for namespace
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.813655       1 shared_informer.go:320] Caches are synced for deployment
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.815238       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.819201       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.819433       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.820006       1 shared_informer.go:320] Caches are synced for disruption
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.821695       1 shared_informer.go:320] Caches are synced for taint
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.821905       1 shared_informer.go:320] Caches are synced for endpoint
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.824479       1 shared_informer.go:320] Caches are synced for persistent volume
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.824852       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.825228       1 shared_informer.go:320] Caches are synced for attach detach
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.825784       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.836209       1 shared_informer.go:320] Caches are synced for service account
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.836651       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0127 12:36:54.689069    9948 command_runner.go:130] ! I0127 12:35:44.836969       1 shared_informer.go:320] Caches are synced for node
	I0127 12:36:54.689598    9948 command_runner.go:130] ! I0127 12:35:44.838015       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0127 12:36:54.689598    9948 command_runner.go:130] ! I0127 12:35:44.838049       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0127 12:36:54.689656    9948 command_runner.go:130] ! I0127 12:35:44.838058       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0127 12:36:54.689656    9948 command_runner.go:130] ! I0127 12:35:44.838065       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0127 12:36:54.689656    9948 command_runner.go:130] ! I0127 12:35:44.838200       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.689694    9948 command_runner.go:130] ! I0127 12:35:44.838217       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.689694    9948 command_runner.go:130] ! I0127 12:35:44.838227       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.689694    9948 command_runner.go:130] ! I0127 12:35:44.844908       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:36:54.689784    9948 command_runner.go:130] ! I0127 12:35:44.845551       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0127 12:36:54.689784    9948 command_runner.go:130] ! I0127 12:35:44.845777       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0127 12:36:54.689784    9948 command_runner.go:130] ! I0127 12:35:44.898551       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.689784    9948 command_runner.go:130] ! I0127 12:35:44.899476       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.689846    9948 command_runner.go:130] ! I0127 12:35:44.900201       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000"
	I0127 12:36:54.689867    9948 command_runner.go:130] ! I0127 12:35:44.900496       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m02"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:35:44.900687       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m03"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:35:44.901405       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:35:44.984858       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:35:45.000632       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="180.930208ms"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:35:45.003909       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="39.2µs"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:35:45.016382       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="195.414857ms"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:35:45.016698       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="108.2µs"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:35:54.975850       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:32.834093       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:32.834425       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:32.855708       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:34.928482       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:34.940809       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:34.955742       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:35.025877       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="15.32946ms"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:35.026020       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="30.3µs"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:40.041357       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:47.580904       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="50.8µs"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:48.616631       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="19.328909ms"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:48.617909       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="35.8µs"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:48.650691       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="23.414753ms"
	I0127 12:36:54.689892    9948 command_runner.go:130] ! I0127 12:36:48.651163       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="28.701µs"
	I0127 12:36:54.708049    9948 logs.go:123] Gathering logs for Docker ...
	I0127 12:36:54.708049    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
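	(For reference, the journalctl output below was gathered over SSH by minikube's log collector using the command shown above. Assuming the multinode-659000 profile from this run were still running, roughly the same Docker and cri-dockerd logs could be pulled by hand with: minikube ssh -p multinode-659000 -- sudo journalctl -u docker -u cri-docker -n 400 — an illustrative invocation, shown here only as a hint for reproducing the collection step.)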
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:11 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:11 minikube cri-dockerd[223]: time="2025-01-27T12:34:11Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:11 minikube cri-dockerd[223]: time="2025-01-27T12:34:11Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:11 minikube cri-dockerd[223]: time="2025-01-27T12:34:11Z" level=info msg="Start docker client with request timeout 0s"
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:11 minikube cri-dockerd[223]: time="2025-01-27T12:34:11Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:11 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:11 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:11 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:14 minikube cri-dockerd[404]: time="2025-01-27T12:34:14Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:14 minikube cri-dockerd[404]: time="2025-01-27T12:34:14Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0127 12:36:54.739374    9948 command_runner.go:130] > Jan 27 12:34:14 minikube cri-dockerd[404]: time="2025-01-27T12:34:14Z" level=info msg="Start docker client with request timeout 0s"
	I0127 12:36:54.739905    9948 command_runner.go:130] > Jan 27 12:34:14 minikube cri-dockerd[404]: time="2025-01-27T12:34:14Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0127 12:36:54.739964    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:54.739964    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0127 12:36:54.739964    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0127 12:36:54.740066    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0127 12:36:54.740066    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0127 12:36:54.740066    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0127 12:36:54.740066    9948 command_runner.go:130] > Jan 27 12:34:16 minikube cri-dockerd[425]: time="2025-01-27T12:34:16Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:34:16 minikube cri-dockerd[425]: time="2025-01-27T12:34:16Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:34:16 minikube cri-dockerd[425]: time="2025-01-27T12:34:16Z" level=info msg="Start docker client with request timeout 0s"
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:34:16 minikube cri-dockerd[425]: time="2025-01-27T12:34:16Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 systemd[1]: Starting Docker Application Container Engine...
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[653]: time="2025-01-27T12:35:01.316616305Z" level=info msg="Starting up"
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[653]: time="2025-01-27T12:35:01.317424338Z" level=info msg="containerd not running, starting managed containerd"
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[653]: time="2025-01-27T12:35:01.318870498Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=659
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.350184287Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374094572Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374181575Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374315681Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374337282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374861203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374889804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375040811Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:54.740130    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375239819Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.740655    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375267320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:54.740708    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375281220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.740708    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375833643Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.740708    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.376559373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.740797    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.379449292Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:54.740824    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.379538296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.379661901Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.379800807Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.380313228Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.380441533Z" level=info msg="metadata content store policy set" policy=shared
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.385960360Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386099266Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386121867Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386137768Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386151968Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386229971Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386475981Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386600687Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386685890Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386757893Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386815695Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386833196Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386854497Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386882698Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386897399Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386908999Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386920500Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386931000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386948401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386962701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387079606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387099107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.740888    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387131708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.741407    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387149509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.741407    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387164010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.741466    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387179110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.741466    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387194311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.741466    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387212812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.741466    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387227412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.741556    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387242613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.741556    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387257314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.741556    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387275514Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0127 12:36:54.741556    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387300315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.741637    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387352418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.741637    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387385019Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0127 12:36:54.741637    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387423920Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0127 12:36:54.741694    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387443921Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387454422Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387465222Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387473923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387486423Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387496523Z" level=info msg="NRI interface is disabled by configuration."
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.388077647Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.388176351Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.388221553Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.388239554Z" level=info msg="containerd successfully booted in 0.040630s"
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:02 multinode-659000 dockerd[653]: time="2025-01-27T12:35:02.375461301Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:02 multinode-659000 dockerd[653]: time="2025-01-27T12:35:02.619440119Z" level=info msg="Loading containers: start."
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:02 multinode-659000 dockerd[653]: time="2025-01-27T12:35:02.931712674Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.079754338Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.199112944Z" level=info msg="Loading containers: done."
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.227370410Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.227394111Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.227415612Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.227924231Z" level=info msg="Daemon has completed initialization"
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.267619030Z" level=info msg="API listen on /var/run/docker.sock"
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.267851638Z" level=info msg="API listen on [::]:2376"
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 systemd[1]: Started Docker Application Container Engine.
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.208684124Z" level=info msg="Processing signal 'terminated'"
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.210887831Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.211188432Z" level=info msg="Daemon shutdown complete"
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.211249132Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.211349733Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 systemd[1]: Stopping Docker Application Container Engine...
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 systemd[1]: docker.service: Deactivated successfully.
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 systemd[1]: Stopped Docker Application Container Engine.
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 systemd[1]: Starting Docker Application Container Engine...
	I0127 12:36:54.741742    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:29.270852796Z" level=info msg="Starting up"
	I0127 12:36:54.742265    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:29.271817099Z" level=info msg="containerd not running, starting managed containerd"
	I0127 12:36:54.742265    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:29.272921603Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1109
	I0127 12:36:54.742265    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.304741210Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0127 12:36:54.742265    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329258592Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0127 12:36:54.742336    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329353092Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0127 12:36:54.742336    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329390892Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0127 12:36:54.742336    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329406192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.742336    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329428593Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:54.742435    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329441293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.742454    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329563193Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329667793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329687993Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329698693Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329723194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329854194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.332844104Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.332945004Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333117005Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333187905Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333222205Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333244905Z" level=info msg="metadata content store policy set" policy=shared
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333669407Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333741907Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333760007Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333804107Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333825507Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333876808Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334348509Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334487410Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334670410Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334694510Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334722510Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334740210Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334754110Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334768211Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.742509    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334783611Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.743033    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334797111Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.743117    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334827611Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334839711Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334900511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334918411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334939711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334956111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334972911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335000311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335303412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335328412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335345712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335365113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335379713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335394013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335408713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335432513Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335458213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335473813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335509613Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335706914Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335751914Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335766514Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335779214Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335790814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335808914Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0127 12:36:54.743155    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335823714Z" level=info msg="NRI interface is disabled by configuration."
	I0127 12:36:54.743675    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.336050915Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0127 12:36:54.743726    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.336227915Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0127 12:36:54.743726    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.336312916Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0127 12:36:54.743726    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.336356016Z" level=info msg="containerd successfully booted in 0.033394s"
	I0127 12:36:54.743726    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.313483202Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0127 12:36:54.743726    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.352802934Z" level=info msg="Loading containers: start."
	I0127 12:36:54.743818    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.586901421Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0127 12:36:54.743876    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.690006868Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0127 12:36:54.743897    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.804531453Z" level=info msg="Loading containers: done."
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.832567747Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.832684748Z" level=info msg="Daemon has completed initialization"
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.868895669Z" level=info msg="API listen on /var/run/docker.sock"
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 systemd[1]: Started Docker Application Container Engine.
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.869822273Z" level=info msg="API listen on [::]:2376"
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Start docker client with request timeout 0s"
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Loaded network plugin cni"
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Start cri-dockerd grpc backend"
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:36Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-58667487b6-2jq9j_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"4c82c0ec4aeaa9b21462a8248326ae982d6f7a0aee31347f1a58d216f0335177\""
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:36Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-668d6bf9bc-2qw6w_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"4a53e133a1cd6ab9514cb15ac3c4f1d5683d17008b482cebb08bf4809e060709\""
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.148610487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.743919    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.149713190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.744452    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.149731191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744452    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.149823291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744503    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.227312151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.744543    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.227946754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.744583    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.228465355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744657    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.229058857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744657    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b770a357d98307d140bf1525f91cca5fa9278f7f9428b9b956db31e6a36de7f2/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.326758786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.326897686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.327082287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.327397788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.340486032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.340542232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.340557232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.340640833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/910315897d84204b3db03c56eaeac0c855a23f6250a406220a840c10e2dad7a7/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5601285bb260a8ced44a77e9dbb10f08580841c917885470ec5941525f08ee76/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cdf534e99b2bbcc52d3bf2ce73ef5d4299b5264cf0a050fa21ff7f6fe2bb3b2a/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.671974447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.672075247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.672094947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.673787353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.761333147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.761791949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.761989149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.763491554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.875104030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.875307231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.879314144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.879751245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.744718    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.905404632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.745241    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.905473732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.905487532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.905580032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:41Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:42.944884578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:42.944962279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:42.944975379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:42.945417180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.028307259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.028541060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.028779960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.029212562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.033020375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.033338176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.033463276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.033775977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/34d579bb511fec290478f20b13002063b43c1a71bd6f2f45f1d83bbd8ac971ab/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b613e9a7a356580fd5381e358408317fd6120a119c23f3f196adda302e5ca97f/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d43e4cc62e0877d4b65191623d58195cd33c60eff33c6e49e605f69620d5115f/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.564400062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.564959364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.565260665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.565864167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.745314    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.593549260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.745850    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.594548363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.745850    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.594809964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.745850    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.595677067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.745850    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.831064858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.745850    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.831237859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.745988    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.831252459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.746043    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.831462360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.746076    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:14.113708902Z" level=info msg="shim disconnected" id=9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f namespace=moby
	I0127 12:36:54.746076    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:14.113811702Z" level=warning msg="cleaning up after shim disconnected" id=9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f namespace=moby
	I0127 12:36:54.746076    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:14.113825002Z" level=info msg="cleaning up dead shim" namespace=moby
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 dockerd[1103]: time="2025-01-27T12:36:14.115914814Z" level=info msg="ignoring event" container=9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.602318882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.604079090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.604098490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.604656892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.795612113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.795786714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.796654617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.796995818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861006350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861082751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861094651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861334452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:36:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6b22dbb5ef3e0d283203499fffad001c9c20c643564a55e5bfa5d6352f80e178/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:36:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef504f99724cba01531b3894329439ae069a4ccac272e31bfac333cc24e62c53/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.321502068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.321825070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.321903471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.322491776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.384958874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.385201176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.385326577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.746132    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.385735080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:54.772096    9948 logs.go:123] Gathering logs for coredns [b3a9ed6e130c] ...
	I0127 12:36:54.772096    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a9ed6e130c"
	I0127 12:36:54.800524    9948 command_runner.go:130] > .:53
	I0127 12:36:54.800524    9948 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 5e2e325279dfa828a8fd1b44d83ab4703abb0247d4beadde42157147650fe687c0862eaa4caa15a5d9139c48c9a9dd5ec3cd962ba60368e8ffb4d02ae4d29aeb
	I0127 12:36:54.800524    9948 command_runner.go:130] > CoreDNS-1.11.3
	I0127 12:36:54.800524    9948 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0127 12:36:54.800524    9948 command_runner.go:130] > [INFO] 127.0.0.1:47464 - 34099 "HINFO IN 5313391549706874198.1206200090770907475. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062040871s
	I0127 12:36:54.800524    9948 logs.go:123] Gathering logs for coredns [f818dd15d8b0] ...
	I0127 12:36:54.800524    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f818dd15d8b0"
	I0127 12:36:54.829398    9948 command_runner.go:130] > .:53
	I0127 12:36:54.829398    9948 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 5e2e325279dfa828a8fd1b44d83ab4703abb0247d4beadde42157147650fe687c0862eaa4caa15a5d9139c48c9a9dd5ec3cd962ba60368e8ffb4d02ae4d29aeb
	I0127 12:36:54.829398    9948 command_runner.go:130] > CoreDNS-1.11.3
	I0127 12:36:54.829398    9948 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 127.0.0.1:50782 - 35950 "HINFO IN 8787717511470146079.8254135695837817311. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.151481959s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:56186 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000430505s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:58756 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.126738988s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:36399 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.053330342s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:35359 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.140941591s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:41150 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000220803s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:57591 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0000709s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:45132 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000133002s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:48593 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000728s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:53274 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000261802s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:57676 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.069110701s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:59948 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000178302s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:39801 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000198802s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:45673 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023238636s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:42840 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154002s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:43505 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000181002s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:34935 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092101s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:54822 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155102s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:50877 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000188102s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:45384 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183802s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:35073 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000227202s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:50517 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000061101s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:37353 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130501s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:42117 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114301s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:46171 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060401s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:55282 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117601s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:41761 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162301s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:35358 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000218902s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:50342 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124402s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:38159 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159602s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:37043 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000171002s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:50762 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000168301s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.1.2:33014 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000603s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:34941 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134301s
	I0127 12:36:54.829531    9948 command_runner.go:130] > [INFO] 10.244.0.3:60117 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000393904s
	I0127 12:36:54.830058    9948 command_runner.go:130] > [INFO] 10.244.0.3:47506 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000214402s
	I0127 12:36:54.830058    9948 command_runner.go:130] > [INFO] 10.244.0.3:42968 - 5 "PTR IN 1.192.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000443604s
	I0127 12:36:54.830114    9948 command_runner.go:130] > [INFO] 10.244.1.2:52260 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193802s
	I0127 12:36:54.830114    9948 command_runner.go:130] > [INFO] 10.244.1.2:40492 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000310903s
	I0127 12:36:54.830114    9948 command_runner.go:130] > [INFO] 10.244.1.2:50341 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074s
	I0127 12:36:54.830114    9948 command_runner.go:130] > [INFO] 10.244.1.2:41676 - 5 "PTR IN 1.192.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000637s
	I0127 12:36:54.830114    9948 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0127 12:36:54.830114    9948 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0127 12:36:54.832376    9948 logs.go:123] Gathering logs for kindnet [373bec67270f] ...
	I0127 12:36:54.832376    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 373bec67270f"
	I0127 12:36:54.859435    9948 command_runner.go:130] ! I0127 12:35:44.464092       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0127 12:36:54.859435    9948 command_runner.go:130] ! I0127 12:35:44.489651       1 main.go:139] hostIP = 172.29.198.106
	I0127 12:36:54.859541    9948 command_runner.go:130] ! podIP = 172.29.198.106
	I0127 12:36:54.859541    9948 command_runner.go:130] ! I0127 12:35:44.489794       1 main.go:148] setting mtu 1500 for CNI 
	I0127 12:36:54.859541    9948 command_runner.go:130] ! I0127 12:35:44.489865       1 main.go:178] kindnetd IP family: "ipv4"
	I0127 12:36:54.859541    9948 command_runner.go:130] ! I0127 12:35:44.490024       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0127 12:36:54.859541    9948 command_runner.go:130] ! I0127 12:35:45.397363       1 main.go:239] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-40: Error: Could not process rule: Operation not supported
	I0127 12:36:54.859623    9948 command_runner.go:130] ! add table inet kindnet-network-policies
	I0127 12:36:54.859623    9948 command_runner.go:130] ! ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:54.859623    9948 command_runner.go:130] ! , skipping network policies
	I0127 12:36:54.859661    9948 command_runner.go:130] ! W0127 12:36:15.407551       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0127 12:36:54.859661    9948 command_runner.go:130] ! E0127 12:36:15.407870       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError"
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:25.405793       1 main.go:297] Handling node with IPs: map[172.29.198.106:{}]
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:25.405967       1 main.go:301] handling current node
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:25.406822       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:25.406903       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:25.408014       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.29.199.129 Flags: [] Table: 0 Realm: 0} 
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:25.408956       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:25.409055       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:25.409321       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.29.206.88 Flags: [] Table: 0 Realm: 0} 
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:35.400986       1 main.go:297] Handling node with IPs: map[172.29.198.106:{}]
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:35.401115       1 main.go:301] handling current node
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:35.401203       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:35.401377       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:35.401789       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:35.401927       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:45.400837       1 main.go:297] Handling node with IPs: map[172.29.198.106:{}]
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:45.401002       1 main.go:301] handling current node
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:45.401061       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:45.401072       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:45.401385       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.859661    9948 command_runner.go:130] ! I0127 12:36:45.401462       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.861566    9948 logs.go:123] Gathering logs for kindnet [d758000dda95] ...
	I0127 12:36:54.861566    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d758000dda95"
	I0127 12:36:54.887046    9948 command_runner.go:130] ! I0127 12:22:14.854106       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.887809    9948 command_runner.go:130] ! I0127 12:22:14.855096       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.887809    9948 command_runner.go:130] ! I0127 12:22:14.855184       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.887809    9948 command_runner.go:130] ! I0127 12:22:24.859265       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.887897    9948 command_runner.go:130] ! I0127 12:22:24.859464       1 main.go:301] handling current node
	I0127 12:36:54.887897    9948 command_runner.go:130] ! I0127 12:22:24.859638       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.887982    9948 command_runner.go:130] ! I0127 12:22:24.859681       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.887999    9948 command_runner.go:130] ! I0127 12:22:24.860150       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.887999    9948 command_runner.go:130] ! I0127 12:22:24.860242       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.888022    9948 command_runner.go:130] ! I0127 12:22:34.860201       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.888042    9948 command_runner.go:130] ! I0127 12:22:34.860282       1 main.go:301] handling current node
	I0127 12:36:54.888077    9948 command_runner.go:130] ! I0127 12:22:34.860531       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.888077    9948 command_runner.go:130] ! I0127 12:22:34.860551       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.888106    9948 command_runner.go:130] ! I0127 12:22:34.861114       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.888126    9948 command_runner.go:130] ! I0127 12:22:34.861204       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.888126    9948 command_runner.go:130] ! I0127 12:22:44.853677       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.888126    9948 command_runner.go:130] ! I0127 12:22:44.853737       1 main.go:301] handling current node
	I0127 12:36:54.888164    9948 command_runner.go:130] ! I0127 12:22:44.853761       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.888164    9948 command_runner.go:130] ! I0127 12:22:44.853838       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.888200    9948 command_runner.go:130] ! I0127 12:22:44.855661       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.888200    9948 command_runner.go:130] ! I0127 12:22:44.855749       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.888236    9948 command_runner.go:130] ! I0127 12:22:54.856510       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.888236    9948 command_runner.go:130] ! I0127 12:22:54.856632       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.888323    9948 command_runner.go:130] ! I0127 12:22:54.857002       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.888345    9948 command_runner.go:130] ! I0127 12:22:54.857030       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.888345    9948 command_runner.go:130] ! I0127 12:22:54.857252       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:22:54.857371       1 main.go:301] handling current node
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:04.859476       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:04.859579       1 main.go:301] handling current node
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:04.859615       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:04.859623       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:04.859972       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:04.859987       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:14.853396       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:14.853515       1 main.go:301] handling current node
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:14.853537       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:14.853546       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:14.853802       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:14.853843       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:24.853600       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:24.853883       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:24.854392       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:24.854484       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:24.854688       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:24.854773       1 main.go:301] handling current node
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:34.853542       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:34.853600       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:34.854132       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:34.854286       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:34.854787       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:34.854920       1 main.go:301] handling current node
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:44.856707       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:44.856833       1 main.go:301] handling current node
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:44.856869       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:44.856877       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.888371    9948 command_runner.go:130] ! I0127 12:23:44.857371       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.888925    9948 command_runner.go:130] ! I0127 12:23:44.857460       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.888969    9948 command_runner.go:130] ! I0127 12:23:54.853590       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.888969    9948 command_runner.go:130] ! I0127 12:23:54.853737       1 main.go:301] handling current node
	I0127 12:36:54.888969    9948 command_runner.go:130] ! I0127 12:23:54.853759       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.888969    9948 command_runner.go:130] ! I0127 12:23:54.853768       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889009    9948 command_runner.go:130] ! I0127 12:23:54.854333       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889033    9948 command_runner.go:130] ! I0127 12:23:54.854403       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889033    9948 command_runner.go:130] ! I0127 12:24:04.862983       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889033    9948 command_runner.go:130] ! I0127 12:24:04.863248       1 main.go:301] handling current node
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:04.863599       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:04.863808       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:04.864418       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:04.864558       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:14.854114       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:14.854152       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:14.854412       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:14.854490       1 main.go:301] handling current node
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:14.854619       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:14.854711       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:24.857372       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:24.857503       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:24.857861       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:24.857991       1 main.go:301] handling current node
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:24.858058       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:24.858126       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:34.854371       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:34.854425       1 main.go:301] handling current node
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:34.854444       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:34.854451       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:34.855276       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:34.855359       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:44.862967       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:44.863069       1 main.go:301] handling current node
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:44.863118       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:44.863132       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:44.863438       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:44.863559       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:54.856232       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:54.856343       1 main.go:301] handling current node
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:54.856417       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:54.856429       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:54.857056       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:24:54.857188       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:25:04.853438       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:25:04.853551       1 main.go:301] handling current node
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:25:04.853573       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:25:04.853581       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:25:04.853903       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889102    9948 command_runner.go:130] ! I0127 12:25:04.853979       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889638    9948 command_runner.go:130] ! I0127 12:25:14.854463       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889638    9948 command_runner.go:130] ! I0127 12:25:14.854571       1 main.go:301] handling current node
	I0127 12:36:54.889638    9948 command_runner.go:130] ! I0127 12:25:14.854614       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889638    9948 command_runner.go:130] ! I0127 12:25:14.854630       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889638    9948 command_runner.go:130] ! I0127 12:25:14.855124       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889638    9948 command_runner.go:130] ! I0127 12:25:14.855157       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889724    9948 command_runner.go:130] ! I0127 12:25:24.853742       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889724    9948 command_runner.go:130] ! I0127 12:25:24.853838       1 main.go:301] handling current node
	I0127 12:36:54.889724    9948 command_runner.go:130] ! I0127 12:25:24.853859       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889724    9948 command_runner.go:130] ! I0127 12:25:24.853866       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889724    9948 command_runner.go:130] ! I0127 12:25:24.854822       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889777    9948 command_runner.go:130] ! I0127 12:25:24.854982       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889777    9948 command_runner.go:130] ! I0127 12:25:34.853374       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889777    9948 command_runner.go:130] ! I0127 12:25:34.853516       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889777    9948 command_runner.go:130] ! I0127 12:25:34.853756       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:34.853919       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:34.854285       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:34.854360       1 main.go:301] handling current node
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:44.855075       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:44.855182       1 main.go:301] handling current node
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:44.855201       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:44.855209       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:44.856108       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:44.856191       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:54.854358       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:54.854550       1 main.go:301] handling current node
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:54.854584       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:54.854606       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:54.854829       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:25:54.854893       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:04.853425       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:04.853480       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:04.854150       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:04.854221       1 main.go:301] handling current node
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:04.854322       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:04.854350       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:14.853895       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:14.854577       1 main.go:301] handling current node
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:14.854615       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:14.854639       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:14.856224       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:14.856319       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:24.858046       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:24.858200       1 main.go:301] handling current node
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:24.858527       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:24.858599       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:24.859022       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:24.859118       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:34.853783       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:34.853853       1 main.go:301] handling current node
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:34.853871       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:34.853878       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:34.854193       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:34.854260       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:44.856492       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:44.856552       1 main.go:301] handling current node
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:44.856569       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:44.856575       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:44.857163       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:44.857246       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:54.858285       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:54.858431       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:54.859101       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:54.859322       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:54.859474       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:26:54.859544       1 main.go:301] handling current node
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:04.858831       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:04.858967       1 main.go:301] handling current node
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:04.859484       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:04.859592       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:04.860213       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:04.860314       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:14.854313       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:14.854366       1 main.go:301] handling current node
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:14.854386       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:14.854394       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:14.854883       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:14.855322       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:24.859182       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:24.859342       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:24.859757       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:24.859824       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.889821    9948 command_runner.go:130] ! I0127 12:27:24.860078       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:24.860255       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:34.854206       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:34.854462       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:34.854567       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:34.854657       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:34.855188       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:34.855233       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:44.861342       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:44.861572       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:44.862224       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:44.862399       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:44.862648       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:44.862687       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:54.853605       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:54.853658       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:54.853924       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:54.854125       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:54.854203       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:27:54.854216       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:04.859858       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:04.859922       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:04.859984       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:04.860038       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:04.860336       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:04.860450       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:14.853470       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:14.853607       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:14.853627       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:14.853634       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:14.854800       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:14.854899       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:24.853786       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:24.853841       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:24.854051       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:24.854078       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:24.854192       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:24.854297       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:34.853571       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:34.853730       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:34.853756       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:34.853765       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:34.853988       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:34.854180       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:44.853630       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:44.854161       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:44.854753       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:44.854886       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:44.855270       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:44.855393       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:54.856731       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:54.856780       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:54.856800       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:54.856807       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:54.857466       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:28:54.857531       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:04.853996       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:04.854093       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:04.854113       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:04.854120       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:04.854865       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:04.855000       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:14.853874       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:14.854279       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:14.854677       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:14.854896       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:14.855469       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:14.856845       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:24.853660       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:24.853766       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:24.853786       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:24.853793       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:24.854261       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:24.854541       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:34.861616       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:34.861807       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:34.862166       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:34.862228       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:34.862400       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:34.862455       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:44.854294       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:44.854418       1 main.go:301] handling current node
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:44.854439       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:44.854448       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:44.854699       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:44.854776       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:54.853707       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.890676    9948 command_runner.go:130] ! I0127 12:29:54.853780       1 main.go:301] handling current node
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:29:54.853914       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:29:54.854022       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:29:54.854423       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:29:54.854566       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:04.853625       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:04.853820       1 main.go:301] handling current node
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:04.854002       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:04.854301       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:04.854878       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:04.854986       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:14.853537       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:14.853729       1 main.go:301] handling current node
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:14.853749       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:14.853756       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:14.855013       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:14.855147       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:24.853563       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:24.853757       1 main.go:301] handling current node
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:24.853779       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:24.853786       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:24.854220       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:24.854327       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:34.858899       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:34.859124       1 main.go:301] handling current node
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:34.859146       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:34.859676       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:34.860572       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:34.860819       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:44.858769       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:44.858890       1 main.go:301] handling current node
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:44.858912       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:44.858920       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:44.859720       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:44.859809       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:54.855090       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:54.855134       1 main.go:301] handling current node
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:54.855151       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:54.855157       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:54.855561       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:30:54.855573       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:04.854121       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:04.854237       1 main.go:301] handling current node
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:04.854256       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:04.854263       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:04.854424       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:04.854452       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:04.854544       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.29.206.88 Flags: [] Table: 0 Realm: 0} 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:14.853651       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:14.853750       1 main.go:301] handling current node
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:14.853771       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:14.853778       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:14.854005       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:14.854084       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:24.854114       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:24.854161       1 main.go:301] handling current node
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:24.854212       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:24.854223       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:24.854591       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:24.854666       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:34.862705       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:34.862793       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:34.863105       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:34.863140       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:34.863334       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:34.863362       1 main.go:301] handling current node
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:44.855275       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:44.855421       1 main.go:301] handling current node
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:44.855462       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:44.855496       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:44.856579       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:44.856690       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:54.856288       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.891686    9948 command_runner.go:130] ! I0127 12:31:54.856579       1 main.go:301] handling current node
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:31:54.856914       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:31:54.857065       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:31:54.857508       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:31:54.857553       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:04.853556       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:04.853630       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:04.854583       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:04.854615       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:04.857114       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:04.857217       1 main.go:301] handling current node
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:14.854183       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:14.854348       1 main.go:301] handling current node
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:14.854376       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:14.854402       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:14.854890       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:14.854992       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:24.853770       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:24.854222       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:24.854498       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:24.854573       1 main.go:301] handling current node
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:24.854606       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:24.854613       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:34.853556       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:34.853715       1 main.go:301] handling current node
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:34.853749       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:34.853879       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:34.854386       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:34.854469       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:44.853378       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:44.853424       1 main.go:301] handling current node
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:44.853441       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:44.853447       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:44.853735       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:44.853765       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:54.859317       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:54.859396       1 main.go:301] handling current node
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:54.859415       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:54.859421       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:54.859756       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:32:54.859853       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:33:04.861975       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:33:04.862085       1 main.go:301] handling current node
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:33:04.862106       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:33:04.862113       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:33:04.862780       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:33:04.862861       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:33:14.853823       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:33:14.853859       1 main.go:301] handling current node
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:33:14.853877       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:33:14.853884       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:33:14.854153       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:54.892675    9948 command_runner.go:130] ! I0127 12:33:14.854165       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:54.908680    9948 logs.go:123] Gathering logs for container status ...
	I0127 12:36:54.908680    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:36:54.963794    9948 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0127 12:36:54.963794    9948 command_runner.go:130] > 528243cca8bfb       8c811b4aec35f                                                                                         7 seconds ago        Running             busybox                   1                   ef504f99724cb       busybox-58667487b6-2jq9j
	I0127 12:36:54.963794    9948 command_runner.go:130] > b3a9ed6e130c0       c69fa2e9cbf5f                                                                                         7 seconds ago        Running             coredns                   1                   6b22dbb5ef3e0       coredns-668d6bf9bc-2qw6w
	I0127 12:36:54.963794    9948 command_runner.go:130] > 389606c183b19       6e38f40d628db                                                                                         27 seconds ago       Running             storage-provisioner       2                   b613e9a7a3565       storage-provisioner
	I0127 12:36:54.963794    9948 command_runner.go:130] > 373bec67270fb       50415e5d05f05                                                                                         About a minute ago   Running             kindnet-cni               1                   d43e4cc62e087       kindnet-z2hqq
	I0127 12:36:54.963794    9948 command_runner.go:130] > 9b2db1d0cb61c       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   b613e9a7a3565       storage-provisioner
	I0127 12:36:54.963794    9948 command_runner.go:130] > 0283b35dee3cc       e29f9c7391fd9                                                                                         About a minute ago   Running             kube-proxy                1                   34d579bb511fe       kube-proxy-s46mv
	I0127 12:36:54.963794    9948 command_runner.go:130] > ea993630a3109       95c0bda56fc4d                                                                                         About a minute ago   Running             kube-apiserver            0                   5601285bb260a       kube-apiserver-multinode-659000
	I0127 12:36:54.963794    9948 command_runner.go:130] > 0ef2a3b50bae8       a9e7e6b294baf                                                                                         About a minute ago   Running             etcd                      0                   cdf534e99b2bb       etcd-multinode-659000
	I0127 12:36:54.963794    9948 command_runner.go:130] > ed51c7eaa9666       2b0d6572d062c                                                                                         About a minute ago   Running             kube-scheduler            1                   910315897d842       kube-scheduler-multinode-659000
	I0127 12:36:54.963794    9948 command_runner.go:130] > 8d4872cda28de       019ee182b58e2                                                                                         About a minute ago   Running             kube-controller-manager   1                   b770a357d9830       kube-controller-manager-multinode-659000
	I0127 12:36:54.963794    9948 command_runner.go:130] > 998a64b2baa2d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   4c82c0ec4aeaa       busybox-58667487b6-2jq9j
	I0127 12:36:54.963794    9948 command_runner.go:130] > f818dd15d8b02       c69fa2e9cbf5f                                                                                         24 minutes ago       Exited              coredns                   0                   4a53e133a1cd6       coredns-668d6bf9bc-2qw6w
	I0127 12:36:54.963794    9948 command_runner.go:130] > d758000dda95d       kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108              24 minutes ago       Exited              kindnet-cni               0                   f2d0bd65fe50d       kindnet-z2hqq
	I0127 12:36:54.963794    9948 command_runner.go:130] > bbec7ccef7da5       e29f9c7391fd9                                                                                         24 minutes ago       Exited              kube-proxy                0                   319cddeebceb6       kube-proxy-s46mv
	I0127 12:36:54.963794    9948 command_runner.go:130] > a16e06a038601       2b0d6572d062c                                                                                         25 minutes ago       Exited              kube-scheduler            0                   5423fc5113290       kube-scheduler-multinode-659000
	I0127 12:36:54.964316    9948 command_runner.go:130] > e07a66f8f6196       019ee182b58e2                                                                                         25 minutes ago       Exited              kube-controller-manager   0                   1bd5bf99bede3       kube-controller-manager-multinode-659000
	I0127 12:36:54.966091    9948 logs.go:123] Gathering logs for dmesg ...
	I0127 12:36:54.966091    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:36:54.984232    9948 command_runner.go:130] > [Jan27 12:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0127 12:36:54.984232    9948 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0127 12:36:54.984319    9948 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0127 12:36:54.984319    9948 command_runner.go:130] > [  +0.124628] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0127 12:36:54.984319    9948 command_runner.go:130] > [  +0.022511] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0127 12:36:54.984381    9948 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0127 12:36:54.984381    9948 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0127 12:36:54.984381    9948 command_runner.go:130] > [  +0.069272] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0127 12:36:54.984425    9948 command_runner.go:130] > [  +0.020914] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0127 12:36:54.984425    9948 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0127 12:36:54.984476    9948 command_runner.go:130] > [Jan27 12:34] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0127 12:36:54.984476    9948 command_runner.go:130] > [  +0.706235] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0127 12:36:54.984476    9948 command_runner.go:130] > [  +1.791193] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0127 12:36:54.984518    9948 command_runner.go:130] > [  +6.780102] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0127 12:36:54.984536    9948 command_runner.go:130] > [  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0127 12:36:54.984568    9948 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0127 12:36:54.984568    9948 command_runner.go:130] > [Jan27 12:35] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	I0127 12:36:54.984609    9948 command_runner.go:130] > [  +0.194598] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	I0127 12:36:54.984642    9948 command_runner.go:130] > [ +25.881577] systemd-fstab-generator[1029]: Ignoring "noauto" option for root device
	I0127 12:36:54.984642    9948 command_runner.go:130] > [  +0.104839] kauditd_printk_skb: 75 callbacks suppressed
	I0127 12:36:54.984696    9948 command_runner.go:130] > [  +0.497850] systemd-fstab-generator[1069]: Ignoring "noauto" option for root device
	I0127 12:36:54.984696    9948 command_runner.go:130] > [  +0.189754] systemd-fstab-generator[1081]: Ignoring "noauto" option for root device
	I0127 12:36:54.984696    9948 command_runner.go:130] > [  +0.209865] systemd-fstab-generator[1095]: Ignoring "noauto" option for root device
	I0127 12:36:54.984749    9948 command_runner.go:130] > [  +2.995294] systemd-fstab-generator[1337]: Ignoring "noauto" option for root device
	I0127 12:36:54.984766    9948 command_runner.go:130] > [  +0.193187] systemd-fstab-generator[1349]: Ignoring "noauto" option for root device
	I0127 12:36:54.984798    9948 command_runner.go:130] > [  +0.167597] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	I0127 12:36:54.984798    9948 command_runner.go:130] > [  +0.247752] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	I0127 12:36:54.984798    9948 command_runner.go:130] > [  +0.858687] systemd-fstab-generator[1500]: Ignoring "noauto" option for root device
	I0127 12:36:54.984798    9948 command_runner.go:130] > [  +0.090112] kauditd_printk_skb: 206 callbacks suppressed
	I0127 12:36:54.984798    9948 command_runner.go:130] > [  +3.380441] systemd-fstab-generator[1641]: Ignoring "noauto" option for root device
	I0127 12:36:54.984858    9948 command_runner.go:130] > [  +1.786352] kauditd_printk_skb: 64 callbacks suppressed
	I0127 12:36:54.984884    9948 command_runner.go:130] > [  +5.236723] kauditd_printk_skb: 10 callbacks suppressed
	I0127 12:36:54.984884    9948 command_runner.go:130] > [  +4.105586] systemd-fstab-generator[2522]: Ignoring "noauto" option for root device
	I0127 12:36:54.984884    9948 command_runner.go:130] > [Jan27 12:36] kauditd_printk_skb: 70 callbacks suppressed
	I0127 12:36:54.986509    9948 logs.go:123] Gathering logs for kube-proxy [0283b35dee3c] ...
	I0127 12:36:54.986581    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0283b35dee3c"
	I0127 12:36:55.011717    9948 command_runner.go:130] ! I0127 12:35:44.449716       1 server_linux.go:66] "Using iptables proxy"
	I0127 12:36:55.011893    9948 command_runner.go:130] ! E0127 12:35:44.569403       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0127 12:36:55.011893    9948 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0127 12:36:55.011893    9948 command_runner.go:130] ! 	add table ip kube-proxy
	I0127 12:36:55.011959    9948 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:55.011959    9948 command_runner.go:130] !  >
	I0127 12:36:55.011959    9948 command_runner.go:130] ! E0127 12:35:44.599245       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0127 12:36:55.011959    9948 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0127 12:36:55.011959    9948 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0127 12:36:55.011959    9948 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:55.011959    9948 command_runner.go:130] !  >
	I0127 12:36:55.012016    9948 command_runner.go:130] ! I0127 12:35:44.767652       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.198.106"]
	I0127 12:36:55.012059    9948 command_runner.go:130] ! E0127 12:35:44.770299       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 12:36:55.012059    9948 command_runner.go:130] ! I0127 12:35:45.038438       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 12:36:55.012119    9948 command_runner.go:130] ! I0127 12:35:45.038556       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 12:36:55.012143    9948 command_runner.go:130] ! I0127 12:35:45.038587       1 server_linux.go:170] "Using iptables Proxier"
	I0127 12:36:55.012171    9948 command_runner.go:130] ! I0127 12:35:45.043111       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 12:36:55.012171    9948 command_runner.go:130] ! I0127 12:35:45.045042       1 server.go:497] "Version info" version="v1.32.1"
	I0127 12:36:55.012207    9948 command_runner.go:130] ! I0127 12:35:45.045375       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:55.012231    9948 command_runner.go:130] ! I0127 12:35:45.053262       1 config.go:199] "Starting service config controller"
	I0127 12:36:55.012260    9948 command_runner.go:130] ! I0127 12:35:45.054808       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 12:36:55.012260    9948 command_runner.go:130] ! I0127 12:35:45.054873       1 config.go:329] "Starting node config controller"
	I0127 12:36:55.012260    9948 command_runner.go:130] ! I0127 12:35:45.054880       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 12:36:55.012260    9948 command_runner.go:130] ! I0127 12:35:45.058308       1 config.go:105] "Starting endpoint slice config controller"
	I0127 12:36:55.012260    9948 command_runner.go:130] ! I0127 12:35:45.058492       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 12:36:55.012260    9948 command_runner.go:130] ! I0127 12:35:45.155116       1 shared_informer.go:320] Caches are synced for node config
	I0127 12:36:55.012260    9948 command_runner.go:130] ! I0127 12:35:45.155116       1 shared_informer.go:320] Caches are synced for service config
	I0127 12:36:55.012260    9948 command_runner.go:130] ! I0127 12:35:45.159566       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 12:36:57.516042    9948 api_server.go:253] Checking apiserver healthz at https://172.29.198.106:8443/healthz ...
	I0127 12:36:57.526317    9948 api_server.go:279] https://172.29.198.106:8443/healthz returned 200:
	ok
	I0127 12:36:57.526857    9948 round_trippers.go:463] GET https://172.29.198.106:8443/version
	I0127 12:36:57.526857    9948 round_trippers.go:469] Request Headers:
	I0127 12:36:57.526857    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:36:57.526857    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:36:57.528764    9948 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0127 12:36:57.528764    9948 round_trippers.go:577] Response Headers:
	I0127 12:36:57.528764    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:36:57 GMT
	I0127 12:36:57.528764    9948 round_trippers.go:580]     Audit-Id: edec2ca6-9776-4d7e-8c95-dd9009a1e93c
	I0127 12:36:57.528764    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:36:57.528764    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:36:57.528764    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:36:57.528764    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:36:57.528764    9948 round_trippers.go:580]     Content-Length: 263
	I0127 12:36:57.528764    9948 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "32",
	  "gitVersion": "v1.32.1",
	  "gitCommit": "e9c9be4007d1664e68796af02b8978640d2c1b26",
	  "gitTreeState": "clean",
	  "buildDate": "2025-01-15T14:31:55Z",
	  "goVersion": "go1.23.4",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0127 12:36:57.528764    9948 api_server.go:141] control plane version: v1.32.1
	I0127 12:36:57.528764    9948 api_server.go:131] duration metric: took 3.6379845s to wait for apiserver health ...
	I0127 12:36:57.528764    9948 system_pods.go:43] waiting for kube-system pods to appear ...
	I0127 12:36:57.539654    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0127 12:36:57.569539    9948 command_runner.go:130] > ea993630a310
	I0127 12:36:57.569607    9948 logs.go:282] 1 containers: [ea993630a310]
	I0127 12:36:57.578470    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0127 12:36:57.607192    9948 command_runner.go:130] > 0ef2a3b50bae
	I0127 12:36:57.608336    9948 logs.go:282] 1 containers: [0ef2a3b50bae]
	I0127 12:36:57.616888    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0127 12:36:57.643533    9948 command_runner.go:130] > b3a9ed6e130c
	I0127 12:36:57.644282    9948 command_runner.go:130] > f818dd15d8b0
	I0127 12:36:57.644282    9948 logs.go:282] 2 containers: [b3a9ed6e130c f818dd15d8b0]
	I0127 12:36:57.654299    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0127 12:36:57.682621    9948 command_runner.go:130] > ed51c7eaa966
	I0127 12:36:57.682621    9948 command_runner.go:130] > a16e06a03860
	I0127 12:36:57.682621    9948 logs.go:282] 2 containers: [ed51c7eaa966 a16e06a03860]
	I0127 12:36:57.691785    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0127 12:36:57.718783    9948 command_runner.go:130] > 0283b35dee3c
	I0127 12:36:57.718783    9948 command_runner.go:130] > bbec7ccef7da
	I0127 12:36:57.720841    9948 logs.go:282] 2 containers: [0283b35dee3c bbec7ccef7da]
	I0127 12:36:57.733117    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0127 12:36:57.757712    9948 command_runner.go:130] > 8d4872cda28d
	I0127 12:36:57.758083    9948 command_runner.go:130] > e07a66f8f619
	I0127 12:36:57.758083    9948 logs.go:282] 2 containers: [8d4872cda28d e07a66f8f619]
	I0127 12:36:57.768668    9948 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0127 12:36:57.794534    9948 command_runner.go:130] > 373bec67270f
	I0127 12:36:57.794534    9948 command_runner.go:130] > d758000dda95
	I0127 12:36:57.794534    9948 logs.go:282] 2 containers: [373bec67270f d758000dda95]
	I0127 12:36:57.794534    9948 logs.go:123] Gathering logs for kubelet ...
	I0127 12:36:57.794721    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0127 12:36:57.825846    9948 command_runner.go:130] > Jan 27 12:35:32 multinode-659000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0127 12:36:57.825846    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1507]: I0127 12:35:33.096330    1507 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0127 12:36:57.825846    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1507]: I0127 12:35:33.097069    1507 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:57.825846    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1507]: I0127 12:35:33.098504    1507 server.go:954] "Client rotation is on, will bootstrap in background"
	I0127 12:36:57.826027    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1507]: E0127 12:35:33.099084    1507 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0127 12:36:57.826027    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:57.826154    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0127 12:36:57.826154    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0127 12:36:57.826154    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0127 12:36:57.826154    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0127 12:36:57.826277    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1565]: I0127 12:35:33.855505    1565 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0127 12:36:57.826277    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1565]: I0127 12:35:33.856023    1565 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:57.826277    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1565]: I0127 12:35:33.856456    1565 server.go:954] "Client rotation is on, will bootstrap in background"
	I0127 12:36:57.826376    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 kubelet[1565]: E0127 12:35:33.856573    1565 run.go:72] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0127 12:36:57.826616    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:57.826616    9948 command_runner.go:130] > Jan 27 12:35:33 multinode-659000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0127 12:36:57.826616    9948 command_runner.go:130] > Jan 27 12:35:34 multinode-659000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0127 12:36:57.826762    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0127 12:36:57.826762    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.167839    1648 server.go:520] "Kubelet version" kubeletVersion="v1.32.1"
	I0127 12:36:57.826762    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.168570    1648 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:57.826762    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.169526    1648 server.go:954] "Client rotation is on, will bootstrap in background"
	I0127 12:36:57.827036    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.171330    1648 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0127 12:36:57.827218    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.190537    1648 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:57.827292    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.208219    1648 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
	I0127 12:36:57.827370    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.208354    1648 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
	I0127 12:36:57.827370    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.217489    1648 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0127 12:36:57.827462    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.217603    1648 server.go:841] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0127 12:36:57.827561    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.218319    1648 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0127 12:36:57.827663    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.218396    1648 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-659000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
	I0127 12:36:57.827663    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.218720    1648 topology_manager.go:138] "Creating topology manager with none policy"
	I0127 12:36:57.827771    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.218780    1648 container_manager_linux.go:304] "Creating device plugin manager"
	I0127 12:36:57.827771    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.219430    1648 state_mem.go:36] "Initialized new in-memory state store"
	I0127 12:36:57.827771    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.221396    1648 kubelet.go:446] "Attempting to sync node with API server"
	I0127 12:36:57.827898    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.221465    1648 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0127 12:36:57.827898    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.221524    1648 kubelet.go:352] "Adding apiserver pod source"
	I0127 12:36:57.827898    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.221568    1648 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0127 12:36:57.828004    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.230949    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-659000&limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:57.828004    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.231085    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-659000&limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:57.828123    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.232363    1648 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="docker" version="27.4.0" apiVersion="v1"
	I0127 12:36:57.828123    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.236967    1648 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0127 12:36:57.828224    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.237190    1648 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0127 12:36:57.828224    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.245589    1648 watchdog_linux.go:99] "Systemd watchdog is not enabled"
	I0127 12:36:57.828316    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.245760    1648 server.go:1287] "Started kubelet"
	I0127 12:36:57.828417    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.246317    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:57.828417    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.246411    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:57.828521    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.246814    1648 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0127 12:36:57.828521    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.247495    1648 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0127 12:36:57.828620    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.249106    1648 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0127 12:36:57.828620    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.260914    1648 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
	I0127 12:36:57.828720    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.262947    1648 server.go:490] "Adding debug handlers to kubelet server"
	I0127 12:36:57.828720    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.264052    1648 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	I0127 12:36:57.828720    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.267083    1648 volume_manager.go:297] "Starting Kubelet Volume Manager"
	I0127 12:36:57.828822    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.267485    1648 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"multinode-659000\" not found"
	I0127 12:36:57.828930    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.270946    1648 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.29.198.106:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-659000.181e8cd12d2fa1af  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-659000,UID:multinode-659000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-659000,},FirstTimestamp:2025-01-27 12:35:36.245739951 +0000 UTC m=+0.150414507,LastTimestamp:2025-01-27 12:35:36.245739951 +0000 UTC m=+0.150414507,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-659000,}"
	I0127 12:36:57.829030    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.275270    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-659000?timeout=10s\": dial tcp 172.29.198.106:8443: connect: connection refused" interval="200ms"
	I0127 12:36:57.829082    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.275715    1648 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0127 12:36:57.829135    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.280615    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:57.829170    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.280911    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:57.829250    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.282354    1648 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0127 12:36:57.829250    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.282424    1648 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0127 12:36:57.829363    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.282441    1648 factory.go:221] Registration of the systemd container factory successfully
	I0127 12:36:57.829363    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.345823    1648 reconciler.go:26] "Reconciler: start to sync state"
	I0127 12:36:57.829478    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.348883    1648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0127 12:36:57.829478    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.352701    1648 cpu_manager.go:221] "Starting CPU manager" policy="none"
	I0127 12:36:57.829478    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.352736    1648 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
	I0127 12:36:57.829602    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.352866    1648 state_mem.go:36] "Initialized new in-memory state store"
	I0127 12:36:57.829602    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353577    1648 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0127 12:36:57.829705    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353729    1648 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0127 12:36:57.829705    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353769    1648 policy_none.go:49] "None policy: Start"
	I0127 12:36:57.829705    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353902    1648 memory_manager.go:186] "Starting memorymanager" policy="None"
	I0127 12:36:57.829705    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.353967    1648 state_mem.go:35] "Initializing new in-memory state store"
	I0127 12:36:57.829830    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.354751    1648 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0127 12:36:57.829894    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.354791    1648 status_manager.go:227] "Starting to sync pod status with apiserver"
	I0127 12:36:57.829894    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.354811    1648 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	I0127 12:36:57.830000    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.354819    1648 kubelet.go:2388] "Starting kubelet main sync loop"
	I0127 12:36:57.830000    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.354862    1648 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0127 12:36:57.830137    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.355393    1648 state_mem.go:75] "Updated machine memory state"
	I0127 12:36:57.830137    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: W0127 12:35:36.358802    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:57.830237    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.358857    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:57.830237    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.371233    1648 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"multinode-659000\" not found"
	I0127 12:36:57.830337    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.373395    1648 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0127 12:36:57.830337    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.373786    1648 eviction_manager.go:189] "Eviction manager: starting control loop"
	I0127 12:36:57.830444    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.373887    1648 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0127 12:36:57.830444    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.380088    1648 iptables.go:577] "Could not set up iptables canary" err=<
	I0127 12:36:57.830444    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0127 12:36:57.830543    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0127 12:36:57.830543    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0127 12:36:57.830543    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0127 12:36:57.830642    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.380760    1648 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
	I0127 12:36:57.830642    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.380984    1648 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-659000\" not found"
	I0127 12:36:57.830730    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.382902    1648 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0127 12:36:57.830821    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.468172    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.830821    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.468821    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c82c0ec4aeaa9b21462a8248326ae982d6f7a0aee31347f1a58d216f0335177"
	I0127 12:36:57.830937    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.468934    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2d0bd65fe50d3b8a64acf8ee065aa49d1a51b768c5fe6fe9532d26fa35aa7b1"
	I0127 12:36:57.830937    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.468988    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bd5bf99bede3e691e572fc4b8a37f4f42f8a9b2520adf8bc87bdf76e8258a4b"
	I0127 12:36:57.830937    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.469050    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5423fc5113290b937df9b531c5fbd748c5d927fd5e170e8126b67bae6a814384"
	I0127 12:36:57.831043    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.470252    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.831139    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.475717    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:57.831139    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.477090    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-659000?timeout=10s\": dial tcp 172.29.198.106:8443: connect: connection refused" interval="400ms"
	I0127 12:36:57.831278    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.480196    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.198.106:8443: connect: connection refused" node="multinode-659000"
	I0127 12:36:57.831278    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.487429    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc9ef8ee86ec2e354006c4c56f82fe9ec4df472096628ad620faba06fa0b1ff8"
	I0127 12:36:57.831393    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.508448    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a53e133a1cd6ab9514cb15ac3c4f1d5683d17008b482cebb08bf4809e060709"
	I0127 12:36:57.831393    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.523288    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="319cddeebceb6ec82b5865f1c67eaf88948a282ace1113869910f5bf8c717d83"
	I0127 12:36:57.831491    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.545844    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b522c4c9f4c776ea35298b9eaf7c05d64bddd6f385e12252bdf6aada9a3e20d"
	I0127 12:36:57.831491    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.566476    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e6c90fc43fa6c0754218ff1c4162045d-kubeconfig\") pod \"kube-scheduler-multinode-659000\" (UID: \"e6c90fc43fa6c0754218ff1c4162045d\") " pod="kube-system/kube-scheduler-multinode-659000"
	I0127 12:36:57.831589    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.566534    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b9fbd177058ba298cde2a92c4ef5c601-k8s-certs\") pod \"kube-apiserver-multinode-659000\" (UID: \"b9fbd177058ba298cde2a92c4ef5c601\") " pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:57.831683    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.566560    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-kubeconfig\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:57.831683    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567472    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:57.831799    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567527    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/575cefa3aa8017dce576fa244e719a4e-etcd-certs\") pod \"etcd-multinode-659000\" (UID: \"575cefa3aa8017dce576fa244e719a4e\") " pod="kube-system/etcd-multinode-659000"
	I0127 12:36:57.831898    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567546    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/575cefa3aa8017dce576fa244e719a4e-etcd-data\") pod \"etcd-multinode-659000\" (UID: \"575cefa3aa8017dce576fa244e719a4e\") " pod="kube-system/etcd-multinode-659000"
	I0127 12:36:57.831981    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567563    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b9fbd177058ba298cde2a92c4ef5c601-ca-certs\") pod \"kube-apiserver-multinode-659000\" (UID: \"b9fbd177058ba298cde2a92c4ef5c601\") " pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:57.832030    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567580    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-ca-certs\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:57.832143    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567687    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-flexvolume-dir\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:57.832191    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567720    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4a14d0700eafa36dd3913955f2c0f839-k8s-certs\") pod \"kube-controller-manager-multinode-659000\" (UID: \"4a14d0700eafa36dd3913955f2c0f839\") " pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:57.832302    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567745    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b9fbd177058ba298cde2a92c4ef5c601-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-659000\" (UID: \"b9fbd177058ba298cde2a92c4ef5c601\") " pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:57.832302    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.567166    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51ee4649b24aa281b3767c049c3c1d4063e516b98501648152da39ee45cb0b26"
	I0127 12:36:57.832302    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.569350    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.832302    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.570289    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.832302    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: I0127 12:35:36.681872    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:57.832302    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.682569    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.198.106:8443: connect: connection refused" node="multinode-659000"
	I0127 12:36:57.832302    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 kubelet[1648]: E0127 12:35:36.878668    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-659000?timeout=10s\": dial tcp 172.29.198.106:8443: connect: connection refused" interval="800ms"
	I0127 12:36:57.832302    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: W0127 12:35:37.056372    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:57.832302    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.056534    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:57.832302    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: I0127 12:35:37.084276    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:57.832302    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.085344    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.198.106:8443: connect: connection refused" node="multinode-659000"
	I0127 12:36:57.832302    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: W0127 12:35:37.281985    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:57.832850    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.282078    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:57.832975    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: W0127 12:35:37.629266    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-659000&limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:57.833026    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.629409    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dmultinode-659000&limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:57.833157    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: W0127 12:35:37.673700    1648 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.29.198.106:8443: connect: connection refused
	I0127 12:36:57.833205    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.673876    1648 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.29.198.106:8443: connect: connection refused" logger="UnhandledError"
	I0127 12:36:57.833298    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.680515    1648 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-659000?timeout=10s\": dial tcp 172.29.198.106:8443: connect: connection refused" interval="1.6s"
	I0127 12:36:57.833342    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: I0127 12:35:37.887498    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:57.833389    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 kubelet[1648]: E0127 12:35:37.888458    1648 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.29.198.106:8443: connect: connection refused" node="multinode-659000"
	I0127 12:36:57.833436    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: E0127 12:35:39.058364    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.833484    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: E0127 12:35:39.084210    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.833575    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: E0127 12:35:39.099659    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.833659    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: E0127 12:35:39.112572    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:39 multinode-659000 kubelet[1648]: I0127 12:35:39.489967    1648 kubelet_node_status.go:76] "Attempting to register node" node="multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:40 multinode-659000 kubelet[1648]: E0127 12:35:40.123734    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:40 multinode-659000 kubelet[1648]: E0127 12:35:40.124212    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:40 multinode-659000 kubelet[1648]: E0127 12:35:40.124507    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:40 multinode-659000 kubelet[1648]: E0127 12:35:40.124790    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.138584    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.139346    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.139719    1648 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"multinode-659000\" not found" node="multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.469180    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.513020    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-multinode-659000\" already exists" pod="kube-system/etcd-multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.513064    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.538800    1648 kubelet_node_status.go:125] "Node was previously registered" node="multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.538905    1648 kubelet_node_status.go:79] "Successfully registered node" node="multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.538949    1648 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.539897    1648 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.540655    1648 setters.go:602] "Node became not ready" node="multinode-659000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-01-27T12:35:41Z","lastTransitionTime":"2025-01-27T12:35:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.555833    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-multinode-659000\" already exists" pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.555924    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.574323    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-multinode-659000\" already exists" pod="kube-system/kube-controller-manager-multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: I0127 12:35:41.574484    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 kubelet[1648]: E0127 12:35:41.589698    1648 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-multinode-659000\" already exists" pod="kube-system/kube-scheduler-multinode-659000"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.247993    1648 apiserver.go:52] "Watching apiserver"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.255092    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-659000" podUID="f19e9efc-57cc-4e2a-b365-920592a7f352"
	I0127 12:36:57.833708    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.257281    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.834245    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.257504    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.834292    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.261197    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/etcd-multinode-659000" podUID="d2a9c448-86a1-48e3-8b48-345c937e5bb4"
	I0127 12:36:57.834340    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.277187    1648 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0127 12:36:57.834387    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.304401    1648 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/etcd-multinode-659000"
	I0127 12:36:57.834434    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.304607    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-multinode-659000"
	I0127 12:36:57.834479    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.309849    1648 kubelet.go:3189] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:57.834526    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.309963    1648 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-multinode-659000"
	I0127 12:36:57.834578    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.343249    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae3b8daf-d674-4cfe-8652-cb5ff6ba8615-lib-modules\") pod \"kube-proxy-s46mv\" (UID: \"ae3b8daf-d674-4cfe-8652-cb5ff6ba8615\") " pod="kube-system/kube-proxy-s46mv"
	I0127 12:36:57.834668    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.343617    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9b617a9c-e2b8-45fd-bee2-45cb03d4cd42-cni-cfg\") pod \"kindnet-z2hqq\" (UID: \"9b617a9c-e2b8-45fd-bee2-45cb03d4cd42\") " pod="kube-system/kindnet-z2hqq"
	I0127 12:36:57.834712    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.343779    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b617a9c-e2b8-45fd-bee2-45cb03d4cd42-lib-modules\") pod \"kindnet-z2hqq\" (UID: \"9b617a9c-e2b8-45fd-bee2-45cb03d4cd42\") " pod="kube-system/kindnet-z2hqq"
	I0127 12:36:57.834801    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.343961    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae3b8daf-d674-4cfe-8652-cb5ff6ba8615-xtables-lock\") pod \"kube-proxy-s46mv\" (UID: \"ae3b8daf-d674-4cfe-8652-cb5ff6ba8615\") " pod="kube-system/kube-proxy-s46mv"
	I0127 12:36:57.834844    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.344263    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b617a9c-e2b8-45fd-bee2-45cb03d4cd42-xtables-lock\") pod \"kindnet-z2hqq\" (UID: \"9b617a9c-e2b8-45fd-bee2-45cb03d4cd42\") " pod="kube-system/kindnet-z2hqq"
	I0127 12:36:57.834930    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.344443    1648 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bcfd7913-1bc0-4c24-882f-2be92ec9b046-tmp\") pod \"storage-provisioner\" (UID: \"bcfd7913-1bc0-4c24-882f-2be92ec9b046\") " pod="kube-system/storage-provisioner"
	I0127 12:36:57.834974    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.345456    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:57.835080    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.345573    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:42.845554363 +0000 UTC m=+6.750229019 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:57.835080    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.362165    1648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bf31ca1befb4fb3e8f2fd27458a3b80" path="/var/lib/kubelet/pods/6bf31ca1befb4fb3e8f2fd27458a3b80/volumes"
	I0127 12:36:57.835194    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.363294    1648 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7291ea72d8be6e47ed8b536906d73549" path="/var/lib/kubelet/pods/7291ea72d8be6e47ed8b536906d73549/volumes"
	I0127 12:36:57.835243    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.396667    1648 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	I0127 12:36:57.835336    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.400478    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.835380    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.400505    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.835487    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.400550    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:42.900534148 +0000 UTC m=+6.805208804 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.835606    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.494698    1648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-659000" podStartSLOduration=0.494540064 podStartE2EDuration="494.540064ms" podCreationTimestamp="2025-01-27 12:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 12:35:42.473709794 +0000 UTC m=+6.378384350" watchObservedRunningTime="2025-01-27 12:35:42.494540064 +0000 UTC m=+6.399214620"
	I0127 12:36:57.835719    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: I0127 12:35:42.494964    1648 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-659000" podStartSLOduration=0.494955765 podStartE2EDuration="494.955765ms" podCreationTimestamp="2025-01-27 12:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-27 12:35:42.493805361 +0000 UTC m=+6.398480017" watchObservedRunningTime="2025-01-27 12:35:42.494955765 +0000 UTC m=+6.399630321"
	I0127 12:36:57.835813    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.849608    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:57.835908    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.849827    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:43.849803559 +0000 UTC m=+7.754478115 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:57.835958    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.951539    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.836004    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.951579    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.836124    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 kubelet[1648]: E0127 12:35:42.951637    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:43.951620201 +0000 UTC m=+7.856294757 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.836177    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: I0127 12:35:43.230846    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b613e9a7a356580fd5381e358408317fd6120a119c23f3f196adda302e5ca97f"
	I0127 12:36:57.836227    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: I0127 12:35:43.240666    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34d579bb511fec290478f20b13002063b43c1a71bd6f2f45f1d83bbd8ac971ab"
	I0127 12:36:57.836279    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.588436    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.836377    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: I0127 12:35:43.594121    1648 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d43e4cc62e0877d4b65191623d58195cd33c60eff33c6e49e605f69620d5115f"
	I0127 12:36:57.836425    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: I0127 12:35:43.594816    1648 kubelet.go:3183] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-659000" podUID="f19e9efc-57cc-4e2a-b365-920592a7f352"
	I0127 12:36:57.836493    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.861607    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:57.836605    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.861754    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:45.861734662 +0000 UTC m=+9.766409318 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:57.836651    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.962791    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.836701    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.962845    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.836794    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 kubelet[1648]: E0127 12:35:43.963033    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:45.962955102 +0000 UTC m=+9.867629758 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.836886    9948 command_runner.go:130] > Jan 27 12:35:44 multinode-659000 kubelet[1648]: E0127 12:35:44.356390    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.836949    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.355639    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.836997    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.883867    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:57.837234    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.883991    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:49.883972962 +0000 UTC m=+13.788647618 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:57.837295    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.984260    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.837295    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.984313    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.837295    9948 command_runner.go:130] > Jan 27 12:35:45 multinode-659000 kubelet[1648]: E0127 12:35:45.984377    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:49.984359299 +0000 UTC m=+13.889033855 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.837295    9948 command_runner.go:130] > Jan 27 12:35:46 multinode-659000 kubelet[1648]: E0127 12:35:46.358731    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.837295    9948 command_runner.go:130] > Jan 27 12:35:46 multinode-659000 kubelet[1648]: E0127 12:35:46.386967    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:57.837295    9948 command_runner.go:130] > Jan 27 12:35:47 multinode-659000 kubelet[1648]: E0127 12:35:47.355582    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.837295    9948 command_runner.go:130] > Jan 27 12:35:48 multinode-659000 kubelet[1648]: E0127 12:35:48.356308    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.837295    9948 command_runner.go:130] > Jan 27 12:35:49 multinode-659000 kubelet[1648]: E0127 12:35:49.356027    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.837295    9948 command_runner.go:130] > Jan 27 12:35:49 multinode-659000 kubelet[1648]: E0127 12:35:49.925365    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:57.837295    9948 command_runner.go:130] > Jan 27 12:35:49 multinode-659000 kubelet[1648]: E0127 12:35:49.925459    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:57.925443152 +0000 UTC m=+21.830117808 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:57.837295    9948 command_runner.go:130] > Jan 27 12:35:50 multinode-659000 kubelet[1648]: E0127 12:35:50.027100    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.837295    9948 command_runner.go:130] > Jan 27 12:35:50 multinode-659000 kubelet[1648]: E0127 12:35:50.027219    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.838030    9948 command_runner.go:130] > Jan 27 12:35:50 multinode-659000 kubelet[1648]: E0127 12:35:50.027346    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:35:58.027289813 +0000 UTC m=+21.931964469 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.838141    9948 command_runner.go:130] > Jan 27 12:35:50 multinode-659000 kubelet[1648]: E0127 12:35:50.355319    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.838191    9948 command_runner.go:130] > Jan 27 12:35:51 multinode-659000 kubelet[1648]: E0127 12:35:51.356503    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.838290    9948 command_runner.go:130] > Jan 27 12:35:51 multinode-659000 kubelet[1648]: E0127 12:35:51.388594    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:57.838358    9948 command_runner.go:130] > Jan 27 12:35:52 multinode-659000 kubelet[1648]: E0127 12:35:52.357390    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.838477    9948 command_runner.go:130] > Jan 27 12:35:53 multinode-659000 kubelet[1648]: E0127 12:35:53.355568    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.838605    9948 command_runner.go:130] > Jan 27 12:35:54 multinode-659000 kubelet[1648]: E0127 12:35:54.355531    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.838657    9948 command_runner.go:130] > Jan 27 12:35:55 multinode-659000 kubelet[1648]: E0127 12:35:55.356228    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.838787    9948 command_runner.go:130] > Jan 27 12:35:56 multinode-659000 kubelet[1648]: E0127 12:35:56.355726    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.838841    9948 command_runner.go:130] > Jan 27 12:35:56 multinode-659000 kubelet[1648]: E0127 12:35:56.392446    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:57.838902    9948 command_runner.go:130] > Jan 27 12:35:57 multinode-659000 kubelet[1648]: E0127 12:35:57.355790    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.838965    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.001233    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:57.839117    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.001401    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:36:14.001383565 +0000 UTC m=+37.906058121 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:57.839164    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.101493    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.839233    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.101659    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.839300    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.101748    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:36:14.101732786 +0000 UTC m=+38.006407342 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.839411    9948 command_runner.go:130] > Jan 27 12:35:58 multinode-659000 kubelet[1648]: E0127 12:35:58.365026    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.839463    9948 command_runner.go:130] > Jan 27 12:35:59 multinode-659000 kubelet[1648]: E0127 12:35:59.356031    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.839523    9948 command_runner.go:130] > Jan 27 12:36:00 multinode-659000 kubelet[1648]: E0127 12:36:00.356282    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.839523    9948 command_runner.go:130] > Jan 27 12:36:01 multinode-659000 kubelet[1648]: E0127 12:36:01.356209    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.839523    9948 command_runner.go:130] > Jan 27 12:36:01 multinode-659000 kubelet[1648]: E0127 12:36:01.394292    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:57.839523    9948 command_runner.go:130] > Jan 27 12:36:02 multinode-659000 kubelet[1648]: E0127 12:36:02.355777    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.839523    9948 command_runner.go:130] > Jan 27 12:36:03 multinode-659000 kubelet[1648]: E0127 12:36:03.356166    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.839523    9948 command_runner.go:130] > Jan 27 12:36:04 multinode-659000 kubelet[1648]: E0127 12:36:04.356089    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.839523    9948 command_runner.go:130] > Jan 27 12:36:05 multinode-659000 kubelet[1648]: E0127 12:36:05.355458    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.839523    9948 command_runner.go:130] > Jan 27 12:36:06 multinode-659000 kubelet[1648]: E0127 12:36:06.356120    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.839523    9948 command_runner.go:130] > Jan 27 12:36:06 multinode-659000 kubelet[1648]: E0127 12:36:06.396811    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:57.839523    9948 command_runner.go:130] > Jan 27 12:36:07 multinode-659000 kubelet[1648]: E0127 12:36:07.355573    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.839523    9948 command_runner.go:130] > Jan 27 12:36:08 multinode-659000 kubelet[1648]: E0127 12:36:08.355837    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.839523    9948 command_runner.go:130] > Jan 27 12:36:09 multinode-659000 kubelet[1648]: E0127 12:36:09.355284    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.840061    9948 command_runner.go:130] > Jan 27 12:36:10 multinode-659000 kubelet[1648]: E0127 12:36:10.356199    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.840108    9948 command_runner.go:130] > Jan 27 12:36:11 multinode-659000 kubelet[1648]: E0127 12:36:11.356023    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.840108    9948 command_runner.go:130] > Jan 27 12:36:11 multinode-659000 kubelet[1648]: E0127 12:36:11.398054    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:12 multinode-659000 kubelet[1648]: E0127 12:36:12.355492    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:13 multinode-659000 kubelet[1648]: E0127 12:36:13.356291    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.058689    1648 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.058911    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume podName:8f0367fc-d842-4cc3-8e71-30869a548243 nodeName:}" failed. No retries permitted until 2025-01-27 12:36:46.058858304 +0000 UTC m=+69.963532860 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8f0367fc-d842-4cc3-8e71-30869a548243-config-volume") pod "coredns-668d6bf9bc-2qw6w" (UID: "8f0367fc-d842-4cc3-8e71-30869a548243") : object "kube-system"/"coredns" not registered
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.159091    1648 projected.go:288] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.159277    1648 projected.go:194] Error preparing data for projected volume kube-api-access-qpzlq for pod default/busybox-58667487b6-2jq9j: object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.159495    1648 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq podName:244fa7e9-f6c4-46a7-b61f-8717e13fd270 nodeName:}" failed. No retries permitted until 2025-01-27 12:36:46.15947175 +0000 UTC m=+70.064146406 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-qpzlq" (UniqueName: "kubernetes.io/projected/244fa7e9-f6c4-46a7-b61f-8717e13fd270-kube-api-access-qpzlq") pod "busybox-58667487b6-2jq9j" (UID: "244fa7e9-f6c4-46a7-b61f-8717e13fd270") : object "default"/"kube-root-ca.crt" not registered
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 kubelet[1648]: E0127 12:36:14.357000    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:15 multinode-659000 kubelet[1648]: I0127 12:36:15.031682    1648 scope.go:117] "RemoveContainer" containerID="134620caeeb93fda5b32a71962e13d1994830a35b93b18ad2387296500dff7b5"
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:15 multinode-659000 kubelet[1648]: I0127 12:36:15.032024    1648 scope.go:117] "RemoveContainer" containerID="9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f"
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:15 multinode-659000 kubelet[1648]: E0127 12:36:15.032236    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(bcfd7913-1bc0-4c24-882f-2be92ec9b046)\"" pod="kube-system/storage-provisioner" podUID="bcfd7913-1bc0-4c24-882f-2be92ec9b046"
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:15 multinode-659000 kubelet[1648]: E0127 12:36:15.355738    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:16 multinode-659000 kubelet[1648]: E0127 12:36:16.356191    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:16 multinode-659000 kubelet[1648]: E0127 12:36:16.399212    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:17 multinode-659000 kubelet[1648]: E0127 12:36:17.355082    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:18 multinode-659000 kubelet[1648]: E0127 12:36:18.356067    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.840206    9948 command_runner.go:130] > Jan 27 12:36:19 multinode-659000 kubelet[1648]: E0127 12:36:19.355675    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.840742    9948 command_runner.go:130] > Jan 27 12:36:20 multinode-659000 kubelet[1648]: E0127 12:36:20.356455    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.840790    9948 command_runner.go:130] > Jan 27 12:36:21 multinode-659000 kubelet[1648]: E0127 12:36:21.355971    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.840790    9948 command_runner.go:130] > Jan 27 12:36:21 multinode-659000 kubelet[1648]: E0127 12:36:21.401078    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:22 multinode-659000 kubelet[1648]: E0127 12:36:22.355954    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:23 multinode-659000 kubelet[1648]: E0127 12:36:23.355387    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:24 multinode-659000 kubelet[1648]: E0127 12:36:24.355437    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:25 multinode-659000 kubelet[1648]: E0127 12:36:25.356289    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:26 multinode-659000 kubelet[1648]: E0127 12:36:26.356493    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:26 multinode-659000 kubelet[1648]: E0127 12:36:26.402364    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 kubelet[1648]: E0127 12:36:27.356407    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 kubelet[1648]: I0127 12:36:27.357050    1648 scope.go:117] "RemoveContainer" containerID="9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:28 multinode-659000 kubelet[1648]: E0127 12:36:28.356371    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:29 multinode-659000 kubelet[1648]: E0127 12:36:29.355555    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:30 multinode-659000 kubelet[1648]: E0127 12:36:30.356227    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:31 multinode-659000 kubelet[1648]: E0127 12:36:31.356043    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]: I0127 12:36:36.363314    1648 scope.go:117] "RemoveContainer" containerID="5f274e5a8851d2aeb5403952c3fba0274fe53614e2e0995d1046693d7e725d5d"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]: E0127 12:36:36.393311    1648 iptables.go:577] "Could not set up iptables canary" err=<
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0127 12:36:57.840886    9948 command_runner.go:130] > Jan 27 12:36:36 multinode-659000 kubelet[1648]: I0127 12:36:36.409087    1648 scope.go:117] "RemoveContainer" containerID="f91e9c2d3ba64a6d34c9bab7c1953b46f4006e0bb493bd1ae993c489cd76e02c"
	I0127 12:36:57.886306    9948 logs.go:123] Gathering logs for kube-apiserver [ea993630a310] ...
	I0127 12:36:57.886306    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ea993630a310"
	I0127 12:36:57.916515    9948 command_runner.go:130] ! W0127 12:35:38.851605       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0127 12:36:57.916584    9948 command_runner.go:130] ! I0127 12:35:38.853397       1 options.go:238] external host was not specified, using 172.29.198.106
	I0127 12:36:57.916584    9948 command_runner.go:130] ! I0127 12:35:38.858160       1 server.go:143] Version: v1.32.1
	I0127 12:36:57.916584    9948 command_runner.go:130] ! I0127 12:35:38.858493       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:57.916584    9948 command_runner.go:130] ! I0127 12:35:39.798695       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0127 12:36:57.916584    9948 command_runner.go:130] ! I0127 12:35:39.843688       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 12:36:57.916711    9948 command_runner.go:130] ! I0127 12:35:39.853521       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:39.853736       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:39.854572       1 instance.go:233] Using reconciler: lease
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:39.914509       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:39.914792       1 genericapiserver.go:767] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.232206       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.232893       1 apis.go:106] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.488401       1 apis.go:106] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.610998       1 apis.go:106] API group "resource.k8s.io" is not enabled, skipping.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.646097       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.646401       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.646556       1 genericapiserver.go:767] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.647499       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.647580       1 genericapiserver.go:767] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.648520       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.649666       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.649756       1 genericapiserver.go:767] Skipping API autoscaling/v2beta1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.649766       1 genericapiserver.go:767] Skipping API autoscaling/v2beta2 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.651998       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.652100       1 genericapiserver.go:767] Skipping API batch/v1beta1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.653327       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.653629       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.653645       1 genericapiserver.go:767] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.654270       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.654362       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.654371       1 genericapiserver.go:767] Skipping API coordination.k8s.io/v1alpha2 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.655349       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.655494       1 genericapiserver.go:767] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.657969       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.658067       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.658077       1 genericapiserver.go:767] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.658845       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.658940       1 genericapiserver.go:767] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! W0127 12:35:40.658951       1 genericapiserver.go:767] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:57.916852    9948 command_runner.go:130] ! I0127 12:35:40.660043       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0127 12:36:57.917430    9948 command_runner.go:130] ! W0127 12:35:40.660172       1 genericapiserver.go:767] Skipping API policy/v1beta1 because it has no resources.
	I0127 12:36:57.917430    9948 command_runner.go:130] ! I0127 12:35:40.662431       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0127 12:36:57.917430    9948 command_runner.go:130] ! W0127 12:35:40.662519       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.917430    9948 command_runner.go:130] ! W0127 12:35:40.662531       1 genericapiserver.go:767] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:57.917430    9948 command_runner.go:130] ! I0127 12:35:40.663022       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0127 12:36:57.917430    9948 command_runner.go:130] ! W0127 12:35:40.663153       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.917430    9948 command_runner.go:130] ! W0127 12:35:40.663165       1 genericapiserver.go:767] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:57.917430    9948 command_runner.go:130] ! I0127 12:35:40.666344       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0127 12:36:57.917629    9948 command_runner.go:130] ! W0127 12:35:40.666495       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.917629    9948 command_runner.go:130] ! W0127 12:35:40.666521       1 genericapiserver.go:767] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:57.917660    9948 command_runner.go:130] ! I0127 12:35:40.668345       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0127 12:36:57.917660    9948 command_runner.go:130] ! W0127 12:35:40.668516       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta3 because it has no resources.
	I0127 12:36:57.917708    9948 command_runner.go:130] ! W0127 12:35:40.668527       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0127 12:36:57.917739    9948 command_runner.go:130] ! W0127 12:35:40.668531       1 genericapiserver.go:767] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.917739    9948 command_runner.go:130] ! I0127 12:35:40.673502       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0127 12:36:57.917767    9948 command_runner.go:130] ! W0127 12:35:40.673587       1 genericapiserver.go:767] Skipping API apps/v1beta2 because it has no resources.
	I0127 12:36:57.917767    9948 command_runner.go:130] ! W0127 12:35:40.673597       1 genericapiserver.go:767] Skipping API apps/v1beta1 because it has no resources.
	I0127 12:36:57.917841    9948 command_runner.go:130] ! I0127 12:35:40.676193       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0127 12:36:57.917841    9948 command_runner.go:130] ! W0127 12:35:40.676284       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.917841    9948 command_runner.go:130] ! W0127 12:35:40.676294       1 genericapiserver.go:767] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0127 12:36:57.917841    9948 command_runner.go:130] ! I0127 12:35:40.677186       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0127 12:36:57.917841    9948 command_runner.go:130] ! W0127 12:35:40.677276       1 genericapiserver.go:767] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.917940    9948 command_runner.go:130] ! I0127 12:35:40.688978       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0127 12:36:57.917940    9948 command_runner.go:130] ! W0127 12:35:40.689072       1 genericapiserver.go:767] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0127 12:36:57.917940    9948 command_runner.go:130] ! I0127 12:35:41.320439       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 12:36:57.918016    9948 command_runner.go:130] ! I0127 12:35:41.320849       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:57.918016    9948 command_runner.go:130] ! I0127 12:35:41.321234       1 secure_serving.go:213] Serving securely on [::]:8443
	I0127 12:36:57.918016    9948 command_runner.go:130] ! I0127 12:35:41.321512       1 dynamic_serving_content.go:135] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0127 12:36:57.918106    9948 command_runner.go:130] ! I0127 12:35:41.324372       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:57.918106    9948 command_runner.go:130] ! I0127 12:35:41.325924       1 controller.go:119] Starting legacy_token_tracking_controller
	I0127 12:36:57.918106    9948 command_runner.go:130] ! I0127 12:35:41.326193       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0127 12:36:57.918106    9948 command_runner.go:130] ! I0127 12:35:41.327573       1 remote_available_controller.go:411] Starting RemoteAvailability controller
	I0127 12:36:57.918180    9948 command_runner.go:130] ! I0127 12:35:41.328217       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I0127 12:36:57.918180    9948 command_runner.go:130] ! I0127 12:35:41.328319       1 system_namespaces_controller.go:66] Starting system namespaces controller
	I0127 12:36:57.918180    9948 command_runner.go:130] ! I0127 12:35:41.329060       1 cluster_authentication_trust_controller.go:462] Starting cluster_authentication_trust_controller controller
	I0127 12:36:57.918180    9948 command_runner.go:130] ! I0127 12:35:41.329095       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0127 12:36:57.918180    9948 command_runner.go:130] ! I0127 12:35:41.329225       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0127 12:36:57.918180    9948 command_runner.go:130] ! I0127 12:35:41.329996       1 controller.go:78] Starting OpenAPI AggregationController
	I0127 12:36:57.918264    9948 command_runner.go:130] ! I0127 12:35:41.330057       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0127 12:36:57.918264    9948 command_runner.go:130] ! I0127 12:35:41.330085       1 apf_controller.go:377] Starting API Priority and Fairness config controller
	I0127 12:36:57.918264    9948 command_runner.go:130] ! I0127 12:35:41.330333       1 apiservice_controller.go:100] Starting APIServiceRegistrationController
	I0127 12:36:57.918264    9948 command_runner.go:130] ! I0127 12:35:41.330379       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0127 12:36:57.918264    9948 command_runner.go:130] ! I0127 12:35:41.331391       1 customresource_discovery_controller.go:292] Starting DiscoveryController
	I0127 12:36:57.918387    9948 command_runner.go:130] ! I0127 12:35:41.331485       1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0127 12:36:57.918719    9948 command_runner.go:130] ! I0127 12:35:41.327929       1 local_available_controller.go:156] Starting LocalAvailability controller
	I0127 12:36:57.918719    9948 command_runner.go:130] ! I0127 12:35:41.333671       1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
	I0127 12:36:57.918719    9948 command_runner.go:130] ! I0127 12:35:41.333703       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:57.918719    9948 command_runner.go:130] ! I0127 12:35:41.333958       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 12:36:57.918799    9948 command_runner.go:130] ! I0127 12:35:41.335863       1 controller.go:142] Starting OpenAPI controller
	I0127 12:36:57.918799    9948 command_runner.go:130] ! I0127 12:35:41.336704       1 controller.go:90] Starting OpenAPI V3 controller
	I0127 12:36:57.918870    9948 command_runner.go:130] ! I0127 12:35:41.336831       1 naming_controller.go:294] Starting NamingConditionController
	I0127 12:36:57.918870    9948 command_runner.go:130] ! I0127 12:35:41.337057       1 establishing_controller.go:81] Starting EstablishingController
	I0127 12:36:57.918870    9948 command_runner.go:130] ! I0127 12:35:41.337215       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0127 12:36:57.918870    9948 command_runner.go:130] ! I0127 12:35:41.337324       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0127 12:36:57.918870    9948 command_runner.go:130] ! I0127 12:35:41.337408       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0127 12:36:57.918939    9948 command_runner.go:130] ! I0127 12:35:41.327968       1 aggregator.go:169] waiting for initial CRD sync...
	I0127 12:36:57.918939    9948 command_runner.go:130] ! I0127 12:35:41.387084       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0127 12:36:57.918939    9948 command_runner.go:130] ! I0127 12:35:41.387441       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0127 12:36:57.919011    9948 command_runner.go:130] ! I0127 12:35:41.450926       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0127 12:36:57.919011    9948 command_runner.go:130] ! I0127 12:35:41.451366       1 policy_source.go:240] refreshing policies
	I0127 12:36:57.919011    9948 command_runner.go:130] ! I0127 12:35:41.488750       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0127 12:36:57.919011    9948 command_runner.go:130] ! I0127 12:35:41.488990       1 aggregator.go:171] initial CRD sync complete...
	I0127 12:36:57.919011    9948 command_runner.go:130] ! I0127 12:35:41.489245       1 autoregister_controller.go:144] Starting autoregister controller
	I0127 12:36:57.919121    9948 command_runner.go:130] ! I0127 12:35:41.489480       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0127 12:36:57.919121    9948 command_runner.go:130] ! I0127 12:35:41.489653       1 cache.go:39] Caches are synced for autoregister controller
	I0127 12:36:57.919121    9948 command_runner.go:130] ! I0127 12:35:41.499151       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0127 12:36:57.919121    9948 command_runner.go:130] ! I0127 12:35:41.527390       1 shared_informer.go:320] Caches are synced for configmaps
	I0127 12:36:57.919200    9948 command_runner.go:130] ! I0127 12:35:41.528625       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0127 12:36:57.919200    9948 command_runner.go:130] ! I0127 12:35:41.529892       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0127 12:36:57.919200    9948 command_runner.go:130] ! I0127 12:35:41.530639       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0127 12:36:57.919200    9948 command_runner.go:130] ! I0127 12:35:41.531604       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0127 12:36:57.919200    9948 command_runner.go:130] ! I0127 12:35:41.531638       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0127 12:36:57.919271    9948 command_runner.go:130] ! I0127 12:35:41.534721       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0127 12:36:57.919271    9948 command_runner.go:130] ! I0127 12:35:41.540933       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 12:36:57.919271    9948 command_runner.go:130] ! I0127 12:35:41.545944       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0127 12:36:57.919271    9948 command_runner.go:130] ! I0127 12:35:42.357869       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0127 12:36:57.919341    9948 command_runner.go:130] ! I0127 12:35:42.374307       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0127 12:36:57.919341    9948 command_runner.go:130] ! W0127 12:35:43.074223       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.29.198.106]
	I0127 12:36:57.919341    9948 command_runner.go:130] ! I0127 12:35:43.075938       1 controller.go:615] quota admission added evaluator for: endpoints
	I0127 12:36:57.919341    9948 command_runner.go:130] ! I0127 12:35:43.085006       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0127 12:36:57.919341    9948 command_runner.go:130] ! I0127 12:35:44.603084       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0127 12:36:57.919431    9948 command_runner.go:130] ! I0127 12:35:44.989601       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0127 12:36:57.919431    9948 command_runner.go:130] ! I0127 12:35:45.141450       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0127 12:36:57.919431    9948 command_runner.go:130] ! I0127 12:35:45.327075       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 12:36:57.919431    9948 command_runner.go:130] ! I0127 12:35:45.338333       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0127 12:36:57.926639    9948 logs.go:123] Gathering logs for kube-proxy [0283b35dee3c] ...
	I0127 12:36:57.926639    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0283b35dee3c"
	I0127 12:36:57.949131    9948 command_runner.go:130] ! I0127 12:35:44.449716       1 server_linux.go:66] "Using iptables proxy"
	I0127 12:36:57.949131    9948 command_runner.go:130] ! E0127 12:35:44.569403       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0127 12:36:57.950036    9948 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0127 12:36:57.950036    9948 command_runner.go:130] ! 	add table ip kube-proxy
	I0127 12:36:57.950036    9948 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:57.950036    9948 command_runner.go:130] !  >
	I0127 12:36:57.950036    9948 command_runner.go:130] ! E0127 12:35:44.599245       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0127 12:36:57.950036    9948 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0127 12:36:57.950036    9948 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0127 12:36:57.950036    9948 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:57.950184    9948 command_runner.go:130] !  >
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:44.767652       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.198.106"]
	I0127 12:36:57.950184    9948 command_runner.go:130] ! E0127 12:35:44.770299       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.038438       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.038556       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.038587       1 server_linux.go:170] "Using iptables Proxier"
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.043111       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.045042       1 server.go:497] "Version info" version="v1.32.1"
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.045375       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.053262       1 config.go:199] "Starting service config controller"
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.054808       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.054873       1 config.go:329] "Starting node config controller"
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.054880       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.058308       1 config.go:105] "Starting endpoint slice config controller"
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.058492       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.155116       1 shared_informer.go:320] Caches are synced for node config
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.155116       1 shared_informer.go:320] Caches are synced for service config
	I0127 12:36:57.950184    9948 command_runner.go:130] ! I0127 12:35:45.159566       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 12:36:57.953121    9948 logs.go:123] Gathering logs for dmesg ...
	I0127 12:36:57.953121    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0127 12:36:57.974705    9948 command_runner.go:130] > [Jan27 12:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0127 12:36:57.974808    9948 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0127 12:36:57.974808    9948 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0127 12:36:57.974808    9948 command_runner.go:130] > [  +0.124628] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0127 12:36:57.974895    9948 command_runner.go:130] > [  +0.022511] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0127 12:36:57.974922    9948 command_runner.go:130] > [  +0.000000] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.000000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.069272] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.020914] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0127 12:36:57.974952    9948 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0127 12:36:57.974952    9948 command_runner.go:130] > [Jan27 12:34] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.706235] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +1.791193] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +6.780102] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	I0127 12:36:57.974952    9948 command_runner.go:130] > [Jan27 12:35] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.194598] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [ +25.881577] systemd-fstab-generator[1029]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.104839] kauditd_printk_skb: 75 callbacks suppressed
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.497850] systemd-fstab-generator[1069]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.189754] systemd-fstab-generator[1081]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.209865] systemd-fstab-generator[1095]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +2.995294] systemd-fstab-generator[1337]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.193187] systemd-fstab-generator[1349]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.167597] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.247752] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.858687] systemd-fstab-generator[1500]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +0.090112] kauditd_printk_skb: 206 callbacks suppressed
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +3.380441] systemd-fstab-generator[1641]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +1.786352] kauditd_printk_skb: 64 callbacks suppressed
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +5.236723] kauditd_printk_skb: 10 callbacks suppressed
	I0127 12:36:57.974952    9948 command_runner.go:130] > [  +4.105586] systemd-fstab-generator[2522]: Ignoring "noauto" option for root device
	I0127 12:36:57.974952    9948 command_runner.go:130] > [Jan27 12:36] kauditd_printk_skb: 70 callbacks suppressed
	I0127 12:36:57.977088    9948 logs.go:123] Gathering logs for coredns [b3a9ed6e130c] ...
	I0127 12:36:57.977088    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b3a9ed6e130c"
	I0127 12:36:58.003868    9948 command_runner.go:130] > .:53
	I0127 12:36:58.003868    9948 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 5e2e325279dfa828a8fd1b44d83ab4703abb0247d4beadde42157147650fe687c0862eaa4caa15a5d9139c48c9a9dd5ec3cd962ba60368e8ffb4d02ae4d29aeb
	I0127 12:36:58.003868    9948 command_runner.go:130] > CoreDNS-1.11.3
	I0127 12:36:58.003868    9948 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0127 12:36:58.003868    9948 command_runner.go:130] > [INFO] 127.0.0.1:47464 - 34099 "HINFO IN 5313391549706874198.1206200090770907475. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062040871s
	I0127 12:36:58.004242    9948 logs.go:123] Gathering logs for coredns [f818dd15d8b0] ...
	I0127 12:36:58.004242    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f818dd15d8b0"
	I0127 12:36:58.032110    9948 command_runner.go:130] > .:53
	I0127 12:36:58.032110    9948 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = 5e2e325279dfa828a8fd1b44d83ab4703abb0247d4beadde42157147650fe687c0862eaa4caa15a5d9139c48c9a9dd5ec3cd962ba60368e8ffb4d02ae4d29aeb
	I0127 12:36:58.032110    9948 command_runner.go:130] > CoreDNS-1.11.3
	I0127 12:36:58.032110    9948 command_runner.go:130] > linux/amd64, go1.21.11, a6338e9
	I0127 12:36:58.032110    9948 command_runner.go:130] > [INFO] 127.0.0.1:50782 - 35950 "HINFO IN 8787717511470146079.8254135695837817311. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.151481959s
	I0127 12:36:58.032110    9948 command_runner.go:130] > [INFO] 10.244.0.3:56186 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000430505s
	I0127 12:36:58.032110    9948 command_runner.go:130] > [INFO] 10.244.0.3:58756 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.126738988s
	I0127 12:36:58.032110    9948 command_runner.go:130] > [INFO] 10.244.0.3:36399 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.053330342s
	I0127 12:36:58.032110    9948 command_runner.go:130] > [INFO] 10.244.0.3:35359 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.140941591s
	I0127 12:36:58.032450    9948 command_runner.go:130] > [INFO] 10.244.1.2:41150 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000220803s
	I0127 12:36:58.032450    9948 command_runner.go:130] > [INFO] 10.244.1.2:57591 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.0000709s
	I0127 12:36:58.032450    9948 command_runner.go:130] > [INFO] 10.244.1.2:45132 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000133002s
	I0127 12:36:58.032450    9948 command_runner.go:130] > [INFO] 10.244.1.2:48593 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.0000728s
	I0127 12:36:58.032565    9948 command_runner.go:130] > [INFO] 10.244.0.3:53274 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000261802s
	I0127 12:36:58.032597    9948 command_runner.go:130] > [INFO] 10.244.0.3:57676 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.069110701s
	I0127 12:36:58.032597    9948 command_runner.go:130] > [INFO] 10.244.0.3:59948 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000178302s
	I0127 12:36:58.032597    9948 command_runner.go:130] > [INFO] 10.244.0.3:39801 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000198802s
	I0127 12:36:58.032658    9948 command_runner.go:130] > [INFO] 10.244.0.3:45673 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.023238636s
	I0127 12:36:58.032714    9948 command_runner.go:130] > [INFO] 10.244.0.3:42840 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154002s
	I0127 12:36:58.032714    9948 command_runner.go:130] > [INFO] 10.244.0.3:43505 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000181002s
	I0127 12:36:58.032714    9948 command_runner.go:130] > [INFO] 10.244.0.3:34935 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092101s
	I0127 12:36:58.032714    9948 command_runner.go:130] > [INFO] 10.244.1.2:54822 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000155102s
	I0127 12:36:58.032826    9948 command_runner.go:130] > [INFO] 10.244.1.2:50877 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000188102s
	I0127 12:36:58.032826    9948 command_runner.go:130] > [INFO] 10.244.1.2:45384 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183802s
	I0127 12:36:58.032826    9948 command_runner.go:130] > [INFO] 10.244.1.2:35073 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000227202s
	I0127 12:36:58.032826    9948 command_runner.go:130] > [INFO] 10.244.1.2:50517 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000061101s
	I0127 12:36:58.032898    9948 command_runner.go:130] > [INFO] 10.244.1.2:37353 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130501s
	I0127 12:36:58.032936    9948 command_runner.go:130] > [INFO] 10.244.1.2:42117 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114301s
	I0127 12:36:58.032936    9948 command_runner.go:130] > [INFO] 10.244.1.2:46171 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060401s
	I0127 12:36:58.032936    9948 command_runner.go:130] > [INFO] 10.244.0.3:55282 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117601s
	I0127 12:36:58.032936    9948 command_runner.go:130] > [INFO] 10.244.0.3:41761 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162301s
	I0127 12:36:58.032936    9948 command_runner.go:130] > [INFO] 10.244.0.3:35358 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000218902s
	I0127 12:36:58.033012    9948 command_runner.go:130] > [INFO] 10.244.0.3:50342 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124402s
	I0127 12:36:58.033012    9948 command_runner.go:130] > [INFO] 10.244.1.2:38159 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159602s
	I0127 12:36:58.033012    9948 command_runner.go:130] > [INFO] 10.244.1.2:37043 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000171002s
	I0127 12:36:58.033012    9948 command_runner.go:130] > [INFO] 10.244.1.2:50762 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000168301s
	I0127 12:36:58.033068    9948 command_runner.go:130] > [INFO] 10.244.1.2:33014 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000603s
	I0127 12:36:58.033089    9948 command_runner.go:130] > [INFO] 10.244.0.3:34941 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134301s
	I0127 12:36:58.033119    9948 command_runner.go:130] > [INFO] 10.244.0.3:60117 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000393904s
	I0127 12:36:58.033119    9948 command_runner.go:130] > [INFO] 10.244.0.3:47506 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000214402s
	I0127 12:36:58.033119    9948 command_runner.go:130] > [INFO] 10.244.0.3:42968 - 5 "PTR IN 1.192.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000443604s
	I0127 12:36:58.033119    9948 command_runner.go:130] > [INFO] 10.244.1.2:52260 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193802s
	I0127 12:36:58.033183    9948 command_runner.go:130] > [INFO] 10.244.1.2:40492 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000310903s
	I0127 12:36:58.033183    9948 command_runner.go:130] > [INFO] 10.244.1.2:50341 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074s
	I0127 12:36:58.033183    9948 command_runner.go:130] > [INFO] 10.244.1.2:41676 - 5 "PTR IN 1.192.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000637s
	I0127 12:36:58.033183    9948 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0127 12:36:58.033281    9948 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0127 12:36:58.035778    9948 logs.go:123] Gathering logs for kube-controller-manager [8d4872cda28d] ...
	I0127 12:36:58.035844    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8d4872cda28d"
	I0127 12:36:58.072970    9948 command_runner.go:130] ! I0127 12:35:39.384985       1 serving.go:386] Generated self-signed cert in-memory
	I0127 12:36:58.072970    9948 command_runner.go:130] ! I0127 12:35:39.805936       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0127 12:36:58.072970    9948 command_runner.go:130] ! I0127 12:35:39.811206       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:58.072970    9948 command_runner.go:130] ! I0127 12:35:39.817632       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0127 12:36:58.072970    9948 command_runner.go:130] ! I0127 12:35:39.822579       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:58.072970    9948 command_runner.go:130] ! I0127 12:35:39.822772       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 12:36:58.072970    9948 command_runner.go:130] ! I0127 12:35:39.823033       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:58.072970    9948 command_runner.go:130] ! I0127 12:35:43.406116       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0127 12:36:58.072970    9948 command_runner.go:130] ! I0127 12:35:43.407249       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0127 12:36:58.072970    9948 command_runner.go:130] ! I0127 12:35:43.417237       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0127 12:36:58.072970    9948 command_runner.go:130] ! I0127 12:35:43.417292       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0127 12:36:58.072970    9948 command_runner.go:130] ! I0127 12:35:43.417300       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0127 12:36:58.073926    9948 command_runner.go:130] ! I0127 12:35:43.417307       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0127 12:36:58.073926    9948 command_runner.go:130] ! I0127 12:35:43.417506       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0127 12:36:58.073926    9948 command_runner.go:130] ! I0127 12:35:43.417534       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0127 12:36:58.073926    9948 command_runner.go:130] ! I0127 12:35:43.417553       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0127 12:36:58.073926    9948 command_runner.go:130] ! I0127 12:35:43.431621       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0127 12:36:58.073926    9948 command_runner.go:130] ! I0127 12:35:43.431964       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0127 12:36:58.073926    9948 command_runner.go:130] ! I0127 12:35:43.431989       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0127 12:36:58.073926    9948 command_runner.go:130] ! I0127 12:35:43.432010       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0127 12:36:58.073926    9948 command_runner.go:130] ! I0127 12:35:43.442961       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0127 12:36:58.073926    9948 command_runner.go:130] ! I0127 12:35:43.447308       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0127 12:36:58.073926    9948 command_runner.go:130] ! I0127 12:35:43.447396       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0127 12:36:58.073926    9948 command_runner.go:130] ! I0127 12:35:43.449412       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0127 12:36:58.073926    9948 command_runner.go:130] ! I0127 12:35:43.449608       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.466583       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.467490       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.467508       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.491988       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.493672       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.493698       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.498557       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.503953       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.503976       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.505729       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.505861       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.505872       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.509718       1 shared_informer.go:320] Caches are synced for tokens
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.510192       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.510208       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.510698       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.510714       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.512896       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.513433       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.513448       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.516433       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0127 12:36:58.074923    9948 command_runner.go:130] ! I0127 12:35:43.516659       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.516671       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.524334       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.524358       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.524545       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.524557       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.534871       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.535028       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.535038       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.557745       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.557975       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.612615       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.612890       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.612906       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.616333       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.627087       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.627107       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.692864       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.692892       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.693095       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.700796       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.703832       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.703867       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.713912       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.714114       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.714094       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.714712       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.714721       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.721904       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.722372       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.723076       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.739709       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.739886       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.739897       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.748074       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.748419       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.748432       1 shared_informer.go:313] Waiting for caches to sync for job
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.774085       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.774108       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.774196       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.814844       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.815383       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.815410       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! W0127 12:35:43.815432       1 shared_informer.go:597] resyncPeriod 17h46m45.188948257s is smaller than resyncCheckPeriod 20h1m58.14772951s and the informer has already started. Changing it to 20h1m58.14772951s
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.815487       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.815503       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.816077       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.816613       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.817053       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.817252       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.817373       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0127 12:36:58.075898    9948 command_runner.go:130] ! I0127 12:35:43.817397       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! W0127 12:35:43.818105       1 shared_informer.go:597] resyncPeriod 12h27m56.377400464s is smaller than resyncCheckPeriod 20h1m58.14772951s and the informer has already started. Changing it to 20h1m58.14772951s
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.818223       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.818270       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.818295       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.818319       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.818336       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.818363       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.818376       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.818392       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.818410       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.818442       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.818764       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.818778       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.819843       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.841955       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.842559       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.842587       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.842995       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.852026       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.852211       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.852253       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.922876       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.923019       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.923033       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.962858       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.962895       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.963021       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:43.963037       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.014798       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.016438       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.016458       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.066881       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.067018       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.067064       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0127 12:36:58.076924    9948 command_runner.go:130] ! W0127 12:35:44.227808       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.236233       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.236429       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.236541       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.236556       1 shared_informer.go:313] Waiting for caches to sync for node
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.261051       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.261341       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.261374       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.314220       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.314319       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.314352       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.364392       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.364625       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.365833       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0127 12:36:58.076924    9948 command_runner.go:130] ! I0127 12:35:44.365937       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.365975       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.365977       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.367697       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.368067       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.368427       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.369763       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.370290       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.370408       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.370568       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.412258       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.412274       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.412282       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.412297       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.412368       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.412379       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.517568       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.517771       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.518074       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.518288       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.564449       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.564546       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.564657       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.591265       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.663628       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.727283       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.739370       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000\" does not exist"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.739797       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m02\" does not exist"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.740184       1 shared_informer.go:320] Caches are synced for crt configmap
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.740835       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m03\" does not exist"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.747985       1 shared_informer.go:320] Caches are synced for GC
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.748593       1 shared_informer.go:320] Caches are synced for job
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.765439       1 shared_informer.go:320] Caches are synced for cronjob
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.765669       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.765982       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.766264       1 shared_informer.go:320] Caches are synced for expand
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.766617       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.767305       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.767462       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.768217       1 shared_informer.go:320] Caches are synced for stateful set
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.766681       1 shared_informer.go:320] Caches are synced for daemon sets
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.774887       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.775167       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.775269       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.775418       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.778028       1 shared_informer.go:320] Caches are synced for HPA
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.793610       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.793916       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0127 12:36:58.077892    9948 command_runner.go:130] ! I0127 12:35:44.798773       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.805302       1 shared_informer.go:320] Caches are synced for PVC protection
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.805404       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.806234       1 shared_informer.go:320] Caches are synced for PV protection
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.811621       1 shared_informer.go:320] Caches are synced for ephemeral
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.813099       1 shared_informer.go:320] Caches are synced for TTL
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.813420       1 shared_informer.go:320] Caches are synced for namespace
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.813655       1 shared_informer.go:320] Caches are synced for deployment
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.815238       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.819201       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.819433       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.820006       1 shared_informer.go:320] Caches are synced for disruption
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.821695       1 shared_informer.go:320] Caches are synced for taint
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.821905       1 shared_informer.go:320] Caches are synced for endpoint
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.824479       1 shared_informer.go:320] Caches are synced for persistent volume
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.824852       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.825228       1 shared_informer.go:320] Caches are synced for attach detach
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.825784       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.836209       1 shared_informer.go:320] Caches are synced for service account
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.836651       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.836969       1 shared_informer.go:320] Caches are synced for node
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.838015       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.838049       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.838058       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.838065       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.838200       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.838217       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.838227       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.844908       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.845551       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.845777       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.898551       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.899476       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.900201       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.900496       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m02"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.900687       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m03"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.901405       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:44.984858       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:45.000632       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="180.930208ms"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:45.003909       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="39.2µs"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:45.016382       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="195.414857ms"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:45.016698       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="108.2µs"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:35:54.975850       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:36:32.834093       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:36:32.834425       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:36:32.855708       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:36:34.928482       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:36:34.940809       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:36:34.955742       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:36:35.025877       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="15.32946ms"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:36:35.026020       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="30.3µs"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:36:40.041357       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:36:47.580904       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="50.8µs"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:36:48.616631       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="19.328909ms"
	I0127 12:36:58.078896    9948 command_runner.go:130] ! I0127 12:36:48.617909       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="35.8µs"
	I0127 12:36:58.079897    9948 command_runner.go:130] ! I0127 12:36:48.650691       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="23.414753ms"
	I0127 12:36:58.079897    9948 command_runner.go:130] ! I0127 12:36:48.651163       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="28.701µs"
	I0127 12:36:58.095932    9948 logs.go:123] Gathering logs for kindnet [d758000dda95] ...
	I0127 12:36:58.095932    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d758000dda95"
	I0127 12:36:58.121914    9948 command_runner.go:130] ! I0127 12:22:14.854106       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:14.855096       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:14.855184       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:24.859265       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:24.859464       1 main.go:301] handling current node
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:24.859638       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:24.859681       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:24.860150       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:24.860242       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:34.860201       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:34.860282       1 main.go:301] handling current node
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:34.860531       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:34.860551       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:34.861114       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:34.861204       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:44.853677       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:44.853737       1 main.go:301] handling current node
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:44.853761       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122605    9948 command_runner.go:130] ! I0127 12:22:44.853838       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:22:44.855661       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:22:44.855749       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:22:54.856510       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:22:54.856632       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:22:54.857002       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:22:54.857030       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:22:54.857252       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:22:54.857371       1 main.go:301] handling current node
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:04.859476       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:04.859579       1 main.go:301] handling current node
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:04.859615       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:04.859623       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:04.859972       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:04.859987       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:14.853396       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:14.853515       1 main.go:301] handling current node
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:14.853537       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:14.853546       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:14.853802       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:14.853843       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:24.853600       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:24.853883       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:24.854392       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:24.854484       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:24.854688       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:24.854773       1 main.go:301] handling current node
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:34.853542       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:34.853600       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:34.854132       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:34.854286       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:34.854787       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:34.854920       1 main.go:301] handling current node
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:44.856707       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:44.856833       1 main.go:301] handling current node
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:44.856869       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:44.856877       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:44.857371       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:44.857460       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:54.853590       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:54.853737       1 main.go:301] handling current node
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:54.853759       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:54.853768       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:54.854333       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:23:54.854403       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:04.862983       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:04.863248       1 main.go:301] handling current node
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:04.863599       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:04.863808       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:04.864418       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:04.864558       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:14.854114       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:14.854152       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:14.854412       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:14.854490       1 main.go:301] handling current node
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:14.854619       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:14.854711       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:24.857372       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:24.857503       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:24.857861       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:24.857991       1 main.go:301] handling current node
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:24.858058       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:24.858126       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:34.854371       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:34.854425       1 main.go:301] handling current node
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:34.854444       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:34.854451       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.122908    9948 command_runner.go:130] ! I0127 12:24:34.855276       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:24:34.855359       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:24:44.862967       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:24:44.863069       1 main.go:301] handling current node
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:24:44.863118       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:24:44.863132       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:24:44.863438       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:24:44.863559       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:24:54.856232       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:24:54.856343       1 main.go:301] handling current node
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:24:54.856417       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:24:54.856429       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:24:54.857056       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:24:54.857188       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:04.853438       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:04.853551       1 main.go:301] handling current node
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:04.853573       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:04.853581       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:04.853903       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:04.853979       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:14.854463       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:14.854571       1 main.go:301] handling current node
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:14.854614       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:14.854630       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:14.855124       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:14.855157       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:24.853742       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:24.853838       1 main.go:301] handling current node
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:24.853859       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:24.853866       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:24.854822       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:24.854982       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:34.853374       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:34.853516       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:34.853756       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:34.853919       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:34.854285       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:34.854360       1 main.go:301] handling current node
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:44.855075       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:44.855182       1 main.go:301] handling current node
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:44.855201       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:44.855209       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:44.856108       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:44.856191       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:54.854358       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:54.854550       1 main.go:301] handling current node
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:54.854584       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:54.854606       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:54.854829       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:25:54.854893       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:04.853425       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:04.853480       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:04.854150       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:04.854221       1 main.go:301] handling current node
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:04.854322       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:04.854350       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:14.853895       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:14.854577       1 main.go:301] handling current node
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:14.854615       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:14.854639       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:14.856224       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:14.856319       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:24.858046       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:24.858200       1 main.go:301] handling current node
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:24.858527       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:24.858599       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:24.859022       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:24.859118       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:34.853783       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:34.853853       1 main.go:301] handling current node
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:34.853871       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:34.853878       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:34.854193       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:34.854260       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:44.856492       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:44.856552       1 main.go:301] handling current node
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:44.856569       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:44.856575       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:44.857163       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:44.857246       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:54.858285       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:54.858431       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:54.859101       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:54.859322       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.123903    9948 command_runner.go:130] ! I0127 12:26:54.859474       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:26:54.859544       1 main.go:301] handling current node
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:04.858831       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:04.858967       1 main.go:301] handling current node
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:04.859484       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:04.859592       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:04.860213       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:04.860314       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:14.854313       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:14.854366       1 main.go:301] handling current node
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:14.854386       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:14.854394       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:14.854883       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:14.855322       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:24.859182       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:24.859342       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:24.859757       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:24.859824       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:24.860078       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:24.860255       1 main.go:301] handling current node
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:34.854206       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:34.854462       1 main.go:301] handling current node
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:34.854567       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:34.854657       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:34.855188       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:34.855233       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:44.861342       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:44.861572       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:44.862224       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:44.862399       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:44.862648       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:44.862687       1 main.go:301] handling current node
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:54.853605       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:54.853658       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:54.853924       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:54.854125       1 main.go:301] handling current node
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:54.854203       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:27:54.854216       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:04.859858       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:04.859922       1 main.go:301] handling current node
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:04.859984       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:04.860038       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:04.860336       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:04.860450       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:14.853470       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:14.853607       1 main.go:301] handling current node
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:14.853627       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:14.853634       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:14.854800       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:14.854899       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:24.853786       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:24.853841       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:24.854051       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:24.854078       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:24.854192       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:24.854297       1 main.go:301] handling current node
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:34.853571       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:34.853730       1 main.go:301] handling current node
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:34.853756       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:34.853765       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:34.853988       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:34.854180       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:44.853630       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:44.854161       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:44.854753       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:44.854886       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:44.855270       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:44.855393       1 main.go:301] handling current node
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:54.856731       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:54.856780       1 main.go:301] handling current node
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:54.856800       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:54.856807       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.124915    9948 command_runner.go:130] ! I0127 12:28:54.857466       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:28:54.857531       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:04.853996       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:04.854093       1 main.go:301] handling current node
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:04.854113       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:04.854120       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:04.854865       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:04.855000       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:14.853874       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:14.854279       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:14.854677       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:14.854896       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:14.855469       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:14.856845       1 main.go:301] handling current node
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:24.853660       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:24.853766       1 main.go:301] handling current node
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:24.853786       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:24.853793       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:24.854261       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:24.854541       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:34.861616       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:34.861807       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:34.862166       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:34.862228       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:34.862400       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:34.862455       1 main.go:301] handling current node
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:44.854294       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:44.854418       1 main.go:301] handling current node
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:44.854439       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:44.854448       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:44.854699       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:44.854776       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:54.853707       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:54.853780       1 main.go:301] handling current node
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:54.853914       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:54.854022       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:54.854423       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:29:54.854566       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:04.853625       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:04.853820       1 main.go:301] handling current node
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:04.854002       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:04.854301       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:04.854878       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:04.854986       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:14.853537       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:14.853729       1 main.go:301] handling current node
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:14.853749       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:14.853756       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:14.855013       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:14.855147       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:24.853563       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:24.853757       1 main.go:301] handling current node
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:24.853779       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:24.853786       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:24.854220       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:24.854327       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:34.858899       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:34.859124       1 main.go:301] handling current node
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:34.859146       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:34.859676       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:34.860572       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:34.860819       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:44.858769       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:44.858890       1 main.go:301] handling current node
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:44.858912       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:44.858920       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:44.859720       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:44.859809       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:54.855090       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:54.855134       1 main.go:301] handling current node
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:54.855151       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:54.855157       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:54.855561       1 main.go:297] Handling node with IPs: map[172.29.195.45:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:30:54.855573       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.2.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:31:04.854121       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:31:04.854237       1 main.go:301] handling current node
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:31:04.854256       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:31:04.854263       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:31:04.854424       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:31:04.854452       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:31:04.854544       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.29.206.88 Flags: [] Table: 0 Realm: 0} 
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:31:14.853651       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.125896    9948 command_runner.go:130] ! I0127 12:31:14.853750       1 main.go:301] handling current node
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:14.853771       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:14.853778       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:14.854005       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:14.854084       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:24.854114       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:24.854161       1 main.go:301] handling current node
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:24.854212       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:24.854223       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:24.854591       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:24.854666       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:34.862705       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:34.862793       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:34.863105       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:34.863140       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:34.863334       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:34.863362       1 main.go:301] handling current node
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:44.855275       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:44.855421       1 main.go:301] handling current node
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:44.855462       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:44.855496       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:44.856579       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:44.856690       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:54.856288       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:54.856579       1 main.go:301] handling current node
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:54.856914       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:54.857065       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:54.857508       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:31:54.857553       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:04.853556       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:04.853630       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:04.854583       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:04.854615       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:04.857114       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:04.857217       1 main.go:301] handling current node
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:14.854183       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:14.854348       1 main.go:301] handling current node
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:14.854376       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:14.854402       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:14.854890       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:14.854992       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:24.853770       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:24.854222       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:24.854498       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:24.854573       1 main.go:301] handling current node
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:24.854606       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:24.854613       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:34.853556       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:34.853715       1 main.go:301] handling current node
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:34.853749       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:34.853879       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:34.854386       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:34.854469       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:44.853378       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:44.853424       1 main.go:301] handling current node
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:44.853441       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:44.853447       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:44.853735       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:44.853765       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:54.859317       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:54.859396       1 main.go:301] handling current node
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:54.859415       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:54.859421       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:54.859756       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.126896    9948 command_runner.go:130] ! I0127 12:32:54.859853       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.127909    9948 command_runner.go:130] ! I0127 12:33:04.861975       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.127909    9948 command_runner.go:130] ! I0127 12:33:04.862085       1 main.go:301] handling current node
	I0127 12:36:58.127909    9948 command_runner.go:130] ! I0127 12:33:04.862106       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.127909    9948 command_runner.go:130] ! I0127 12:33:04.862113       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.127909    9948 command_runner.go:130] ! I0127 12:33:04.862780       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.127909    9948 command_runner.go:130] ! I0127 12:33:04.862861       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.127909    9948 command_runner.go:130] ! I0127 12:33:14.853823       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:36:58.127909    9948 command_runner.go:130] ! I0127 12:33:14.853859       1 main.go:301] handling current node
	I0127 12:36:58.127909    9948 command_runner.go:130] ! I0127 12:33:14.853877       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.127909    9948 command_runner.go:130] ! I0127 12:33:14.853884       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.127909    9948 command_runner.go:130] ! I0127 12:33:14.854153       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.127909    9948 command_runner.go:130] ! I0127 12:33:14.854165       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.143920    9948 logs.go:123] Gathering logs for describe nodes ...
	I0127 12:36:58.143920    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0127 12:36:58.334316    9948 command_runner.go:130] > Name:               multinode-659000
	I0127 12:36:58.334408    9948 command_runner.go:130] > Roles:              control-plane
	I0127 12:36:58.334408    9948 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0127 12:36:58.334483    9948 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0127 12:36:58.334483    9948 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0127 12:36:58.334483    9948 command_runner.go:130] >                     kubernetes.io/hostname=multinode-659000
	I0127 12:36:58.334483    9948 command_runner.go:130] >                     kubernetes.io/os=linux
	I0127 12:36:58.334483    9948 command_runner.go:130] >                     minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	I0127 12:36:58.334483    9948 command_runner.go:130] >                     minikube.k8s.io/name=multinode-659000
	I0127 12:36:58.334541    9948 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0127 12:36:58.334541    9948 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_01_27T12_12_00_0700
	I0127 12:36:58.334541    9948 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0127 12:36:58.334541    9948 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0127 12:36:58.334593    9948 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0127 12:36:58.334616    9948 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0127 12:36:58.334616    9948 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0127 12:36:58.334616    9948 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0127 12:36:58.334616    9948 command_runner.go:130] > CreationTimestamp:  Mon, 27 Jan 2025 12:11:55 +0000
	I0127 12:36:58.334687    9948 command_runner.go:130] > Taints:             <none>
	I0127 12:36:58.334687    9948 command_runner.go:130] > Unschedulable:      false
	I0127 12:36:58.334687    9948 command_runner.go:130] > Lease:
	I0127 12:36:58.334687    9948 command_runner.go:130] >   HolderIdentity:  multinode-659000
	I0127 12:36:58.334687    9948 command_runner.go:130] >   AcquireTime:     <unset>
	I0127 12:36:58.334687    9948 command_runner.go:130] >   RenewTime:       Mon, 27 Jan 2025 12:36:52 +0000
	I0127 12:36:58.334687    9948 command_runner.go:130] > Conditions:
	I0127 12:36:58.334687    9948 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0127 12:36:58.334779    9948 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0127 12:36:58.334779    9948 command_runner.go:130] >   MemoryPressure   False   Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:11:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0127 12:36:58.334859    9948 command_runner.go:130] >   DiskPressure     False   Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:11:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0127 12:36:58.334859    9948 command_runner.go:130] >   PIDPressure      False   Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:11:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0127 12:36:58.334887    9948 command_runner.go:130] >   Ready            True    Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:36:32 +0000   KubeletReady                 kubelet is posting ready status
	I0127 12:36:58.334934    9948 command_runner.go:130] > Addresses:
	I0127 12:36:58.334956    9948 command_runner.go:130] >   InternalIP:  172.29.198.106
	I0127 12:36:58.334956    9948 command_runner.go:130] >   Hostname:    multinode-659000
	I0127 12:36:58.334982    9948 command_runner.go:130] > Capacity:
	I0127 12:36:58.334982    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:58.334982    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:58.335025    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:58.335025    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:58.335025    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:58.335061    9948 command_runner.go:130] > Allocatable:
	I0127 12:36:58.335061    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:58.335061    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:58.335061    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:58.335061    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:58.335061    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:58.335061    9948 command_runner.go:130] > System Info:
	I0127 12:36:58.335061    9948 command_runner.go:130] >   Machine ID:                 312902fc96b948148d51eecf097c4a9d
	I0127 12:36:58.335061    9948 command_runner.go:130] >   System UUID:                be6234aa-9e29-bb41-8165-59b265a4d7d0
	I0127 12:36:58.335061    9948 command_runner.go:130] >   Boot ID:                    058425a5-0652-4c5c-a517-2369b8cac13d
	I0127 12:36:58.335061    9948 command_runner.go:130] >   Kernel Version:             5.10.207
	I0127 12:36:58.335061    9948 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0127 12:36:58.335061    9948 command_runner.go:130] >   Operating System:           linux
	I0127 12:36:58.335209    9948 command_runner.go:130] >   Architecture:               amd64
	I0127 12:36:58.335209    9948 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0127 12:36:58.335209    9948 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0127 12:36:58.335209    9948 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0127 12:36:58.335252    9948 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0127 12:36:58.335252    9948 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0127 12:36:58.335306    9948 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0127 12:36:58.335306    9948 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0127 12:36:58.335306    9948 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0127 12:36:58.335306    9948 command_runner.go:130] >   default                     busybox-58667487b6-2jq9j                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0127 12:36:58.335374    9948 command_runner.go:130] >   kube-system                 coredns-668d6bf9bc-2qw6w                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     24m
	I0127 12:36:58.335374    9948 command_runner.go:130] >   kube-system                 etcd-multinode-659000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         76s
	I0127 12:36:58.335374    9948 command_runner.go:130] >   kube-system                 kindnet-z2hqq                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	I0127 12:36:58.335435    9948 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-659000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         76s
	I0127 12:36:58.335460    9948 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-659000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	I0127 12:36:58.335517    9948 command_runner.go:130] >   kube-system                 kube-proxy-s46mv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0127 12:36:58.335559    9948 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-659000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25m
	I0127 12:36:58.335580    9948 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	I0127 12:36:58.335580    9948 command_runner.go:130] > Allocated resources:
	I0127 12:36:58.335580    9948 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0127 12:36:58.335580    9948 command_runner.go:130] >   Resource           Requests     Limits
	I0127 12:36:58.335580    9948 command_runner.go:130] >   --------           --------     ------
	I0127 12:36:58.335635    9948 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0127 12:36:58.335657    9948 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0127 12:36:58.335657    9948 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0127 12:36:58.335680    9948 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0127 12:36:58.335680    9948 command_runner.go:130] > Events:
	I0127 12:36:58.335704    9948 command_runner.go:130] >   Type     Reason                   Age                From             Message
	I0127 12:36:58.335733    9948 command_runner.go:130] >   ----     ------                   ----               ----             -------
	I0127 12:36:58.335733    9948 command_runner.go:130] >   Normal   Starting                 24m                kube-proxy       
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   Starting                 73s                kube-proxy       
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   Starting                 25m                kubelet          Starting kubelet.
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   NodeHasSufficientMemory  25m (x8 over 25m)  kubelet          Node multinode-659000 status is now: NodeHasSufficientMemory
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    25m (x8 over 25m)  kubelet          Node multinode-659000 status is now: NodeHasNoDiskPressure
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   NodeHasSufficientPID     25m (x7 over 25m)  kubelet          Node multinode-659000 status is now: NodeHasSufficientPID
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   NodeAllocatableEnforced  25m                kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    24m                kubelet          Node multinode-659000 status is now: NodeHasNoDiskPressure
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   NodeHasSufficientMemory  24m                kubelet          Node multinode-659000 status is now: NodeHasSufficientMemory
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   NodeHasSufficientPID     24m                kubelet          Node multinode-659000 status is now: NodeHasSufficientPID
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   Starting                 24m                kubelet          Starting kubelet.
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   RegisteredNode           24m                node-controller  Node multinode-659000 event: Registered Node multinode-659000 in Controller
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   NodeReady                24m                kubelet          Node multinode-659000 status is now: NodeReady
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   Starting                 82s                kubelet          Starting kubelet.
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   NodeHasSufficientMemory  82s (x8 over 82s)  kubelet          Node multinode-659000 status is now: NodeHasSufficientMemory
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   NodeHasNoDiskPressure    82s (x8 over 82s)  kubelet          Node multinode-659000 status is now: NodeHasNoDiskPressure
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   NodeHasSufficientPID     82s (x7 over 82s)  kubelet          Node multinode-659000 status is now: NodeHasSufficientPID
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Warning  Rebooted                 77s                kubelet          Node multinode-659000 has been rebooted, boot id: 058425a5-0652-4c5c-a517-2369b8cac13d
	I0127 12:36:58.335759    9948 command_runner.go:130] >   Normal   RegisteredNode           74s                node-controller  Node multinode-659000 event: Registered Node multinode-659000 in Controller
	I0127 12:36:58.335759    9948 command_runner.go:130] > Name:               multinode-659000-m02
	I0127 12:36:58.335759    9948 command_runner.go:130] > Roles:              <none>
	I0127 12:36:58.335759    9948 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0127 12:36:58.335759    9948 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0127 12:36:58.335759    9948 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0127 12:36:58.335759    9948 command_runner.go:130] >                     kubernetes.io/hostname=multinode-659000-m02
	I0127 12:36:58.335759    9948 command_runner.go:130] >                     kubernetes.io/os=linux
	I0127 12:36:58.335759    9948 command_runner.go:130] >                     minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	I0127 12:36:58.335759    9948 command_runner.go:130] >                     minikube.k8s.io/name=multinode-659000
	I0127 12:36:58.335759    9948 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0127 12:36:58.336287    9948 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_01_27T12_15_08_0700
	I0127 12:36:58.336347    9948 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0127 12:36:58.336347    9948 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0127 12:36:58.336347    9948 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0127 12:36:58.336507    9948 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0127 12:36:58.336507    9948 command_runner.go:130] > CreationTimestamp:  Mon, 27 Jan 2025 12:15:07 +0000
	I0127 12:36:58.336507    9948 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0127 12:36:58.336507    9948 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0127 12:36:58.336507    9948 command_runner.go:130] > Unschedulable:      false
	I0127 12:36:58.336507    9948 command_runner.go:130] > Lease:
	I0127 12:36:58.336507    9948 command_runner.go:130] >   HolderIdentity:  multinode-659000-m02
	I0127 12:36:58.336507    9948 command_runner.go:130] >   AcquireTime:     <unset>
	I0127 12:36:58.336507    9948 command_runner.go:130] >   RenewTime:       Mon, 27 Jan 2025 12:32:39 +0000
	I0127 12:36:58.336507    9948 command_runner.go:130] > Conditions:
	I0127 12:36:58.336507    9948 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0127 12:36:58.336507    9948 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0127 12:36:58.336507    9948 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:58.336507    9948 command_runner.go:130] >   DiskPressure     Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:58.336507    9948 command_runner.go:130] >   PIDPressure      Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:58.336507    9948 command_runner.go:130] >   Ready            Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:58.336507    9948 command_runner.go:130] > Addresses:
	I0127 12:36:58.336507    9948 command_runner.go:130] >   InternalIP:  172.29.199.129
	I0127 12:36:58.336507    9948 command_runner.go:130] >   Hostname:    multinode-659000-m02
	I0127 12:36:58.336507    9948 command_runner.go:130] > Capacity:
	I0127 12:36:58.336507    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:58.336507    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:58.336507    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:58.336507    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:58.336507    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:58.336507    9948 command_runner.go:130] > Allocatable:
	I0127 12:36:58.336507    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:58.336507    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:58.336507    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:58.336507    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:58.336507    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:58.336507    9948 command_runner.go:130] > System Info:
	I0127 12:36:58.336507    9948 command_runner.go:130] >   Machine ID:                 30ce15ff72904b54b07c49f3e2f28802
	I0127 12:36:58.336507    9948 command_runner.go:130] >   System UUID:                b6923799-fa1e-b54c-9340-50dd6a2378f5
	I0127 12:36:58.336507    9948 command_runner.go:130] >   Boot ID:                    3308d183-ec79-4aeb-9d90-80d47cdbff63
	I0127 12:36:58.336507    9948 command_runner.go:130] >   Kernel Version:             5.10.207
	I0127 12:36:58.336507    9948 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0127 12:36:58.336507    9948 command_runner.go:130] >   Operating System:           linux
	I0127 12:36:58.336507    9948 command_runner.go:130] >   Architecture:               amd64
	I0127 12:36:58.336507    9948 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0127 12:36:58.336507    9948 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0127 12:36:58.336507    9948 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0127 12:36:58.337030    9948 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0127 12:36:58.337030    9948 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0127 12:36:58.337030    9948 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0127 12:36:58.337086    9948 command_runner.go:130] >   Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0127 12:36:58.337154    9948 command_runner.go:130] >   ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	I0127 12:36:58.337154    9948 command_runner.go:130] >   default                     busybox-58667487b6-ktfxc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	I0127 12:36:58.337183    9948 command_runner.go:130] >   kube-system                 kindnet-n7vjl               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0127 12:36:58.337214    9948 command_runner.go:130] >   kube-system                 kube-proxy-pjhc8            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0127 12:36:58.337214    9948 command_runner.go:130] > Allocated resources:
	I0127 12:36:58.337214    9948 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0127 12:36:58.337214    9948 command_runner.go:130] >   Resource           Requests   Limits
	I0127 12:36:58.337214    9948 command_runner.go:130] >   --------           --------   ------
	I0127 12:36:58.337214    9948 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0127 12:36:58.337214    9948 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0127 12:36:58.337214    9948 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0127 12:36:58.337214    9948 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0127 12:36:58.337309    9948 command_runner.go:130] > Events:
	I0127 12:36:58.337309    9948 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0127 12:36:58.337309    9948 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0127 12:36:58.337338    9948 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0127 12:36:58.337338    9948 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m (x2 over 21m)  kubelet          Node multinode-659000-m02 status is now: NodeHasSufficientMemory
	I0127 12:36:58.337338    9948 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m (x2 over 21m)  kubelet          Node multinode-659000-m02 status is now: NodeHasNoDiskPressure
	I0127 12:36:58.337338    9948 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m (x2 over 21m)  kubelet          Node multinode-659000-m02 status is now: NodeHasSufficientPID
	I0127 12:36:58.337338    9948 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:58.337338    9948 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-659000-m02 event: Registered Node multinode-659000-m02 in Controller
	I0127 12:36:58.337338    9948 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-659000-m02 status is now: NodeReady
	I0127 12:36:58.337338    9948 command_runner.go:130] >   Normal  RegisteredNode           74s                node-controller  Node multinode-659000-m02 event: Registered Node multinode-659000-m02 in Controller
	I0127 12:36:58.337338    9948 command_runner.go:130] >   Normal  NodeNotReady             24s                node-controller  Node multinode-659000-m02 status is now: NodeNotReady
	I0127 12:36:58.337338    9948 command_runner.go:130] > Name:               multinode-659000-m03
	I0127 12:36:58.337338    9948 command_runner.go:130] > Roles:              <none>
	I0127 12:36:58.337338    9948 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0127 12:36:58.337338    9948 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0127 12:36:58.337338    9948 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0127 12:36:58.337338    9948 command_runner.go:130] >                     kubernetes.io/hostname=multinode-659000-m03
	I0127 12:36:58.337338    9948 command_runner.go:130] >                     kubernetes.io/os=linux
	I0127 12:36:58.337338    9948 command_runner.go:130] >                     minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	I0127 12:36:58.337338    9948 command_runner.go:130] >                     minikube.k8s.io/name=multinode-659000
	I0127 12:36:58.337338    9948 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0127 12:36:58.337338    9948 command_runner.go:130] >                     minikube.k8s.io/updated_at=2025_01_27T12_31_04_0700
	I0127 12:36:58.337338    9948 command_runner.go:130] >                     minikube.k8s.io/version=v1.35.0
	I0127 12:36:58.337338    9948 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0127 12:36:58.337338    9948 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0127 12:36:58.337338    9948 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0127 12:36:58.337338    9948 command_runner.go:130] > CreationTimestamp:  Mon, 27 Jan 2025 12:31:04 +0000
	I0127 12:36:58.337338    9948 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0127 12:36:58.337338    9948 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0127 12:36:58.337338    9948 command_runner.go:130] > Unschedulable:      false
	I0127 12:36:58.337338    9948 command_runner.go:130] > Lease:
	I0127 12:36:58.337338    9948 command_runner.go:130] >   HolderIdentity:  multinode-659000-m03
	I0127 12:36:58.337338    9948 command_runner.go:130] >   AcquireTime:     <unset>
	I0127 12:36:58.337338    9948 command_runner.go:130] >   RenewTime:       Mon, 27 Jan 2025 12:32:15 +0000
	I0127 12:36:58.337338    9948 command_runner.go:130] > Conditions:
	I0127 12:36:58.337338    9948 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0127 12:36:58.337338    9948 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0127 12:36:58.337338    9948 command_runner.go:130] >   MemoryPressure   Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:58.337338    9948 command_runner.go:130] >   DiskPressure     Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:58.337338    9948 command_runner.go:130] >   PIDPressure      Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:58.337338    9948 command_runner.go:130] >   Ready            Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0127 12:36:58.337338    9948 command_runner.go:130] > Addresses:
	I0127 12:36:58.337338    9948 command_runner.go:130] >   InternalIP:  172.29.206.88
	I0127 12:36:58.337338    9948 command_runner.go:130] >   Hostname:    multinode-659000-m03
	I0127 12:36:58.337338    9948 command_runner.go:130] > Capacity:
	I0127 12:36:58.337338    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:58.337338    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:58.337866    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:58.337866    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:58.337925    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:58.337925    9948 command_runner.go:130] > Allocatable:
	I0127 12:36:58.337925    9948 command_runner.go:130] >   cpu:                2
	I0127 12:36:58.337925    9948 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0127 12:36:58.337925    9948 command_runner.go:130] >   hugepages-2Mi:      0
	I0127 12:36:58.337925    9948 command_runner.go:130] >   memory:             2164264Ki
	I0127 12:36:58.337925    9948 command_runner.go:130] >   pods:               110
	I0127 12:36:58.337925    9948 command_runner.go:130] > System Info:
	I0127 12:36:58.337925    9948 command_runner.go:130] >   Machine ID:                 5cd7b7bdbad940e0831e949f70fdd5af
	I0127 12:36:58.337925    9948 command_runner.go:130] >   System UUID:                bab0a90b-9ed8-ba42-88b9-fc6568ad7a53
	I0127 12:36:58.338031    9948 command_runner.go:130] >   Boot ID:                    9d0d04c8-71ef-487a-a13c-e1de6463b3fe
	I0127 12:36:58.338031    9948 command_runner.go:130] >   Kernel Version:             5.10.207
	I0127 12:36:58.338031    9948 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0127 12:36:58.338031    9948 command_runner.go:130] >   Operating System:           linux
	I0127 12:36:58.338031    9948 command_runner.go:130] >   Architecture:               amd64
	I0127 12:36:58.338107    9948 command_runner.go:130] >   Container Runtime Version:  docker://27.4.0
	I0127 12:36:58.338107    9948 command_runner.go:130] >   Kubelet Version:            v1.32.1
	I0127 12:36:58.338128    9948 command_runner.go:130] >   Kube-Proxy Version:         v1.32.1
	I0127 12:36:58.338155    9948 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0127 12:36:58.338155    9948 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0127 12:36:58.338155    9948 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0127 12:36:58.338155    9948 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0127 12:36:58.338155    9948 command_runner.go:130] >   kube-system                 kindnet-kpfjt       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	I0127 12:36:58.338155    9948 command_runner.go:130] >   kube-system                 kube-proxy-sk5js    0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	I0127 12:36:58.338155    9948 command_runner.go:130] > Allocated resources:
	I0127 12:36:58.338155    9948 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Resource           Requests   Limits
	I0127 12:36:58.338155    9948 command_runner.go:130] >   --------           --------   ------
	I0127 12:36:58.338155    9948 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0127 12:36:58.338155    9948 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0127 12:36:58.338155    9948 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0127 12:36:58.338155    9948 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0127 12:36:58.338155    9948 command_runner.go:130] > Events:
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0127 12:36:58.338155    9948 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Normal  Starting                 16m                    kube-proxy       
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Normal  Starting                 5m50s                  kube-proxy       
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Normal  NodeHasSufficientMemory  17m (x2 over 17m)      kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientMemory
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Normal  NodeHasSufficientPID     17m (x2 over 17m)      kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientPID
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Normal  NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    17m (x2 over 17m)      kubelet          Node multinode-659000-m03 status is now: NodeHasNoDiskPressure
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Normal  NodeReady                16m                    kubelet          Node multinode-659000-m03 status is now: NodeReady
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Normal  Starting                 5m55s                  kubelet          Starting kubelet.
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Normal  CIDRAssignmentFailed     5m54s                  cidrAllocator    Node multinode-659000-m03 status is now: CIDRAssignmentFailed
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m54s (x2 over 5m54s)  kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientMemory
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m54s (x2 over 5m54s)  kubelet          Node multinode-659000-m03 status is now: NodeHasNoDiskPressure
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m54s (x2 over 5m54s)  kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientPID
	I0127 12:36:58.338155    9948 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m54s                  kubelet          Updated Node Allocatable limit across pods
	I0127 12:36:58.338677    9948 command_runner.go:130] >   Normal  RegisteredNode           5m50s                  node-controller  Node multinode-659000-m03 event: Registered Node multinode-659000-m03 in Controller
	I0127 12:36:58.338733    9948 command_runner.go:130] >   Normal  NodeReady                5m36s                  kubelet          Node multinode-659000-m03 status is now: NodeReady
	I0127 12:36:58.338733    9948 command_runner.go:130] >   Normal  NodeNotReady             3m50s                  node-controller  Node multinode-659000-m03 status is now: NodeNotReady
	I0127 12:36:58.338733    9948 command_runner.go:130] >   Normal  RegisteredNode           74s                    node-controller  Node multinode-659000-m03 event: Registered Node multinode-659000-m03 in Controller
	I0127 12:36:58.348475    9948 logs.go:123] Gathering logs for kube-proxy [bbec7ccef7da] ...
	I0127 12:36:58.348475    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbec7ccef7da"
	I0127 12:36:58.383936    9948 command_runner.go:130] ! I0127 12:12:05.290111       1 server_linux.go:66] "Using iptables proxy"
	I0127 12:36:58.383936    9948 command_runner.go:130] ! E0127 12:12:05.321300       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0127 12:36:58.383936    9948 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-24: Error: Could not process rule: Operation not supported
	I0127 12:36:58.383936    9948 command_runner.go:130] ! 	add table ip kube-proxy
	I0127 12:36:58.383936    9948 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:58.383936    9948 command_runner.go:130] !  >
	I0127 12:36:58.383936    9948 command_runner.go:130] ! E0127 12:12:05.352123       1 proxier.go:733] "Error cleaning up nftables rules" err=<
	I0127 12:36:58.383936    9948 command_runner.go:130] ! 	could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
	I0127 12:36:58.383936    9948 command_runner.go:130] ! 	add table ip6 kube-proxy
	I0127 12:36:58.383936    9948 command_runner.go:130] ! 	^^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:58.383936    9948 command_runner.go:130] !  >
	I0127 12:36:58.383936    9948 command_runner.go:130] ! I0127 12:12:05.378799       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.204.17"]
	I0127 12:36:58.383936    9948 command_runner.go:130] ! E0127 12:12:05.378872       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 12:36:58.383936    9948 command_runner.go:130] ! I0127 12:12:05.470419       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 12:36:58.383936    9948 command_runner.go:130] ! I0127 12:12:05.470552       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 12:36:58.383936    9948 command_runner.go:130] ! I0127 12:12:05.470596       1 server_linux.go:170] "Using iptables Proxier"
	I0127 12:36:58.383936    9948 command_runner.go:130] ! I0127 12:12:05.475557       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 12:36:58.383936    9948 command_runner.go:130] ! I0127 12:12:05.476697       1 server.go:497] "Version info" version="v1.32.1"
	I0127 12:36:58.383936    9948 command_runner.go:130] ! I0127 12:12:05.476717       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:58.383936    9948 command_runner.go:130] ! I0127 12:12:05.478788       1 config.go:199] "Starting service config controller"
	I0127 12:36:58.384955    9948 command_runner.go:130] ! I0127 12:12:05.478844       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 12:36:58.384955    9948 command_runner.go:130] ! I0127 12:12:05.478916       1 config.go:105] "Starting endpoint slice config controller"
	I0127 12:36:58.384955    9948 command_runner.go:130] ! I0127 12:12:05.479018       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 12:36:58.384955    9948 command_runner.go:130] ! I0127 12:12:05.480053       1 config.go:329] "Starting node config controller"
	I0127 12:36:58.384955    9948 command_runner.go:130] ! I0127 12:12:05.480113       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 12:36:58.384955    9948 command_runner.go:130] ! I0127 12:12:05.579605       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 12:36:58.384955    9948 command_runner.go:130] ! I0127 12:12:05.579669       1 shared_informer.go:320] Caches are synced for service config
	I0127 12:36:58.384955    9948 command_runner.go:130] ! I0127 12:12:05.580463       1 shared_informer.go:320] Caches are synced for node config
	I0127 12:36:58.387934    9948 logs.go:123] Gathering logs for kube-controller-manager [e07a66f8f619] ...
	I0127 12:36:58.387934    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e07a66f8f619"
	I0127 12:36:58.422542    9948 command_runner.go:130] ! I0127 12:11:53.668834       1 serving.go:386] Generated self-signed cert in-memory
	I0127 12:36:58.422617    9948 command_runner.go:130] ! I0127 12:11:53.986868       1 controllermanager.go:185] "Starting" version="v1.32.1"
	I0127 12:36:58.422638    9948 command_runner.go:130] ! I0127 12:11:53.987309       1 controllermanager.go:187] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:58.422638    9948 command_runner.go:130] ! I0127 12:11:53.989401       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0127 12:36:58.422638    9948 command_runner.go:130] ! I0127 12:11:53.990012       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:58.422638    9948 command_runner.go:130] ! I0127 12:11:53.990187       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0127 12:36:58.422698    9948 command_runner.go:130] ! I0127 12:11:53.990322       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0127 12:36:58.422723    9948 command_runner.go:130] ! I0127 12:11:58.581695       1 controllermanager.go:765] "Started controller" controller="serviceaccount-token-controller"
	I0127 12:36:58.422723    9948 command_runner.go:130] ! I0127 12:11:58.581741       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0127 12:36:58.422723    9948 command_runner.go:130] ! I0127 12:11:58.615284       1 controllermanager.go:765] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0127 12:36:58.422723    9948 command_runner.go:130] ! I0127 12:11:58.615497       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0127 12:36:58.422805    9948 command_runner.go:130] ! I0127 12:11:58.615545       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0127 12:36:58.422805    9948 command_runner.go:130] ! I0127 12:11:58.626456       1 controllermanager.go:765] "Started controller" controller="endpoints-controller"
	I0127 12:36:58.422805    9948 command_runner.go:130] ! I0127 12:11:58.626896       1 endpoints_controller.go:182] "Starting endpoint controller" logger="endpoints-controller"
	I0127 12:36:58.422911    9948 command_runner.go:130] ! I0127 12:11:58.626952       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0127 12:36:58.422931    9948 command_runner.go:130] ! I0127 12:11:58.636784       1 controllermanager.go:765] "Started controller" controller="serviceaccount-controller"
	I0127 12:36:58.422931    9948 command_runner.go:130] ! I0127 12:11:58.636866       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I0127 12:36:58.422983    9948 command_runner.go:130] ! I0127 12:11:58.637077       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I0127 12:36:58.423004    9948 command_runner.go:130] ! I0127 12:11:58.637108       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0127 12:36:58.423004    9948 command_runner.go:130] ! I0127 12:11:58.649619       1 controllermanager.go:765] "Started controller" controller="persistentvolume-expander-controller"
	I0127 12:36:58.423004    9948 command_runner.go:130] ! I0127 12:11:58.649750       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="volumeattributesclass-protection-controller" requiredFeatureGates=["VolumeAttributesClass"]
	I0127 12:36:58.423004    9948 command_runner.go:130] ! I0127 12:11:58.649765       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0127 12:36:58.423089    9948 command_runner.go:130] ! I0127 12:11:58.650223       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0127 12:36:58.423089    9948 command_runner.go:130] ! I0127 12:11:58.650457       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0127 12:36:58.423155    9948 command_runner.go:130] ! I0127 12:11:58.682646       1 shared_informer.go:320] Caches are synced for tokens
	I0127 12:36:58.423155    9948 command_runner.go:130] ! I0127 12:11:58.684061       1 controllermanager.go:765] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0127 12:36:58.423155    9948 command_runner.go:130] ! I0127 12:11:58.684098       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0127 12:36:58.423234    9948 command_runner.go:130] ! I0127 12:11:58.698781       1 controllermanager.go:765] "Started controller" controller="disruption-controller"
	I0127 12:36:58.423234    9948 command_runner.go:130] ! I0127 12:11:58.699001       1 disruption.go:452] "Sending events to api server." logger="disruption-controller"
	I0127 12:36:58.423234    9948 command_runner.go:130] ! I0127 12:11:58.699050       1 disruption.go:463] "Starting disruption controller" logger="disruption-controller"
	I0127 12:36:58.423234    9948 command_runner.go:130] ! I0127 12:11:58.699060       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0127 12:36:58.423288    9948 command_runner.go:130] ! I0127 12:11:58.720187       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0127 12:36:58.423308    9948 command_runner.go:130] ! I0127 12:11:58.720450       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I0127 12:36:58.423308    9948 command_runner.go:130] ! I0127 12:11:58.725202       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0127 12:36:58.423390    9948 command_runner.go:130] ! I0127 12:11:58.736652       1 controllermanager.go:765] "Started controller" controller="endpointslice-mirroring-controller"
	I0127 12:36:58.423390    9948 command_runner.go:130] ! I0127 12:11:58.737667       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0127 12:36:58.423460    9948 command_runner.go:130] ! I0127 12:11:58.738017       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0127 12:36:58.423483    9948 command_runner.go:130] ! I0127 12:11:58.758863       1 controllermanager.go:765] "Started controller" controller="pod-garbage-collector-controller"
	I0127 12:36:58.423483    9948 command_runner.go:130] ! I0127 12:11:58.759137       1 controllermanager.go:743] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0127 12:36:58.423483    9948 command_runner.go:130] ! I0127 12:11:58.759589       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0127 12:36:58.423536    9948 command_runner.go:130] ! I0127 12:11:58.759751       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0127 12:36:58.423536    9948 command_runner.go:130] ! I0127 12:11:58.778737       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0127 12:36:58.423536    9948 command_runner.go:130] ! I0127 12:11:58.779301       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0127 12:36:58.423536    9948 command_runner.go:130] ! I0127 12:11:58.794263       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0127 12:36:58.423603    9948 command_runner.go:130] ! I0127 12:11:58.805098       1 controllermanager.go:765] "Started controller" controller="persistentvolume-protection-controller"
	I0127 12:36:58.423639    9948 command_runner.go:130] ! I0127 12:11:58.805155       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0127 12:36:58.423639    9948 command_runner.go:130] ! I0127 12:11:58.805917       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0127 12:36:58.423639    9948 command_runner.go:130] ! I0127 12:11:58.889766       1 controllermanager.go:765] "Started controller" controller="replicationcontroller-controller"
	I0127 12:36:58.423695    9948 command_runner.go:130] ! I0127 12:11:58.889864       1 replica_set.go:217] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0127 12:36:58.423716    9948 command_runner.go:130] ! I0127 12:11:58.889880       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0127 12:36:58.423716    9948 command_runner.go:130] ! I0127 12:11:59.169736       1 controllermanager.go:765] "Started controller" controller="namespace-controller"
	I0127 12:36:58.423716    9948 command_runner.go:130] ! I0127 12:11:59.169792       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I0127 12:36:58.423716    9948 command_runner.go:130] ! I0127 12:11:59.169804       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0127 12:36:58.423807    9948 command_runner.go:130] ! I0127 12:11:59.292507       1 controllermanager.go:765] "Started controller" controller="replicaset-controller"
	I0127 12:36:58.423807    9948 command_runner.go:130] ! I0127 12:11:59.292665       1 replica_set.go:217] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0127 12:36:58.423807    9948 command_runner.go:130] ! I0127 12:11:59.292680       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0127 12:36:58.423865    9948 command_runner.go:130] ! I0127 12:11:59.451231       1 controllermanager.go:765] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0127 12:36:58.423890    9948 command_runner.go:130] ! I0127 12:11:59.451328       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:11:59.451387       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:11:59.451649       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:11:59.594702       1 controllermanager.go:765] "Started controller" controller="endpointslice-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:11:59.594829       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="service-lb-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:11:59.595498       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:11:59.595889       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:11:59.744969       1 controllermanager.go:765] "Started controller" controller="persistentvolume-binder-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:11:59.745617       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:11:59.745871       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:11:59.892444       1 controllermanager.go:765] "Started controller" controller="ttl-after-finished-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:11:59.892907       1 ttlafterfinished_controller.go:112] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:11:59.893093       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.136328       1 controllermanager.go:765] "Started controller" controller="garbage-collector-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.136634       1 garbagecollector.go:144] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.136654       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.136681       1 graph_builder.go:351] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.425858       1 node_lifecycle_controller.go:432] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.426027       1 controllermanager.go:765] "Started controller" controller="node-lifecycle-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.426047       1 controllermanager.go:723] "Skipping a cloud provider controller" controller="node-route-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.426160       1 node_lifecycle_controller.go:466] "Sending events to api server" logger="node-lifecycle-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.426327       1 node_lifecycle_controller.go:477] "Starting node controller" logger="node-lifecycle-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.426356       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.685414       1 controllermanager.go:765] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.685471       1 horizontal.go:201] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.685482       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.841490       1 controllermanager.go:765] "Started controller" controller="cronjob-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.841888       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.841953       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.888027       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.888135       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.888174       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.889767       1 controllermanager.go:765] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.889893       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0127 12:36:58.423912    9948 command_runner.go:130] ! I0127 12:12:00.889957       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0127 12:36:58.424447    9948 command_runner.go:130] ! I0127 12:12:00.890020       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0127 12:36:58.424487    9948 command_runner.go:130] ! I0127 12:12:00.890047       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0127 12:36:58.424487    9948 command_runner.go:130] ! I0127 12:12:00.890072       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0127 12:36:58.424487    9948 command_runner.go:130] ! I0127 12:12:00.890079       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0127 12:36:58.424487    9948 command_runner.go:130] ! I0127 12:12:00.890101       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:58.424584    9948 command_runner.go:130] ! I0127 12:12:00.890256       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:58.424584    9948 command_runner.go:130] ! I0127 12:12:00.890391       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0127 12:36:58.424584    9948 command_runner.go:130] ! I0127 12:12:01.042988       1 controllermanager.go:765] "Started controller" controller="token-cleaner-controller"
	I0127 12:36:58.424584    9948 command_runner.go:130] ! I0127 12:12:01.043513       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0127 12:36:58.424651    9948 command_runner.go:130] ! I0127 12:12:01.043602       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0127 12:36:58.424651    9948 command_runner.go:130] ! I0127 12:12:01.043761       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0127 12:36:58.424651    9948 command_runner.go:130] ! W0127 12:12:01.189051       1 type.go:183] The watchlist request for nodes ended with an error, falling back to the standard LIST semantics, err = nodes is forbidden: User "system:serviceaccount:kube-system:node-controller" cannot watch resource "nodes" in API group "" at the cluster scope
	I0127 12:36:58.424709    9948 command_runner.go:130] ! I0127 12:12:01.192613       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0127 12:36:58.424709    9948 command_runner.go:130] ! I0127 12:12:01.192663       1 controllermanager.go:765] "Started controller" controller="node-ipam-controller"
	I0127 12:36:58.424709    9948 command_runner.go:130] ! I0127 12:12:01.193062       1 node_ipam_controller.go:141] "Starting ipam controller" logger="node-ipam-controller"
	I0127 12:36:58.424709    9948 command_runner.go:130] ! I0127 12:12:01.193147       1 shared_informer.go:313] Waiting for caches to sync for node
	I0127 12:36:58.424709    9948 command_runner.go:130] ! I0127 12:12:01.493812       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0127 12:36:58.424807    9948 command_runner.go:130] ! I0127 12:12:01.493885       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0127 12:36:58.424807    9948 command_runner.go:130] ! I0127 12:12:01.493919       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0127 12:36:58.424867    9948 command_runner.go:130] ! I0127 12:12:01.494208       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0127 12:36:58.424867    9948 command_runner.go:130] ! I0127 12:12:01.494371       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0127 12:36:58.424867    9948 command_runner.go:130] ! I0127 12:12:01.494391       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0127 12:36:58.424950    9948 command_runner.go:130] ! I0127 12:12:01.494413       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0127 12:36:58.424976    9948 command_runner.go:130] ! I0127 12:12:01.494456       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0127 12:36:58.425030    9948 command_runner.go:130] ! I0127 12:12:01.494473       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0127 12:36:58.425055    9948 command_runner.go:130] ! I0127 12:12:01.494487       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0127 12:36:58.425055    9948 command_runner.go:130] ! I0127 12:12:01.494531       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0127 12:36:58.425055    9948 command_runner.go:130] ! I0127 12:12:01.494547       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0127 12:36:58.425114    9948 command_runner.go:130] ! I0127 12:12:01.494617       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0127 12:36:58.425114    9948 command_runner.go:130] ! I0127 12:12:01.494687       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0127 12:36:58.425217    9948 command_runner.go:130] ! I0127 12:12:01.494717       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0127 12:36:58.425217    9948 command_runner.go:130] ! I0127 12:12:01.494749       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0127 12:36:58.425217    9948 command_runner.go:130] ! I0127 12:12:01.494763       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0127 12:36:58.425294    9948 command_runner.go:130] ! I0127 12:12:01.494781       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0127 12:36:58.425345    9948 command_runner.go:130] ! I0127 12:12:01.494815       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0127 12:36:58.425385    9948 command_runner.go:130] ! I0127 12:12:01.494890       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:01.495196       1 controllermanager.go:765] "Started controller" controller="resourcequota-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:01.495268       1 resource_quota_controller.go:300] "Starting resource quota controller" logger="resourcequota-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:01.495404       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:01.495519       1 resource_quota_monitor.go:308] "QuotaMonitor running" logger="resourcequota-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:01.640900       1 controllermanager.go:765] "Started controller" controller="daemonset-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:01.641423       1 daemon_controller.go:294] "Starting daemon sets controller" logger="daemonset-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:01.641492       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:01.789671       1 controllermanager.go:765] "Started controller" controller="job-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:01.790209       1 job_controller.go:243] "Starting job controller" logger="job-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:01.790224       1 shared_informer.go:313] Waiting for caches to sync for job
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:01.939873       1 controllermanager.go:765] "Started controller" controller="ephemeral-volume-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:01.940295       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:01.940375       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.099155       1 controllermanager.go:765] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.099654       1 attach_detach_controller.go:338] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.099741       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.240427       1 controllermanager.go:765] "Started controller" controller="clusterrole-aggregation-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.240688       1 clusterroleaggregation_controller.go:194] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.240725       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.390343       1 controllermanager.go:765] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.390438       1 publisher.go:107] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.390450       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.539643       1 controllermanager.go:765] "Started controller" controller="ttl-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.539766       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.539778       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.691835       1 controllermanager.go:765] "Started controller" controller="bootstrap-signer-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.691969       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.739108       1 controllermanager.go:765] "Started controller" controller="taint-eviction-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.739143       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.739157       1 controllermanager.go:717] "Controller is disabled by a feature gate" controller="selinux-warning-controller" requiredFeatureGates=["SELinuxChangePolicy"]
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.739400       1 taint_eviction.go:281] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.739775       1 taint_eviction.go:287] "Sending events to api server" logger="taint-eviction-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.740069       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.890126       1 controllermanager.go:765] "Started controller" controller="deployment-controller"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.890235       1 deployment_controller.go:173] "Starting controller" logger="deployment-controller" controller="deployment"
	I0127 12:36:58.425412    9948 command_runner.go:130] ! I0127 12:12:02.890247       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0127 12:36:58.425947    9948 command_runner.go:130] ! I0127 12:12:03.040125       1 controllermanager.go:765] "Started controller" controller="statefulset-controller"
	I0127 12:36:58.425947    9948 command_runner.go:130] ! I0127 12:12:03.040770       1 stateful_set.go:166] "Starting stateful set controller" logger="statefulset-controller"
	I0127 12:36:58.425947    9948 command_runner.go:130] ! I0127 12:12:03.040983       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0127 12:36:58.426021    9948 command_runner.go:130] ! I0127 12:12:03.063768       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0127 12:36:58.426021    9948 command_runner.go:130] ! I0127 12:12:03.092877       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0127 12:36:58.426077    9948 command_runner.go:130] ! I0127 12:12:03.093448       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0127 12:36:58.426077    9948 command_runner.go:130] ! I0127 12:12:03.110720       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000\" does not exist"
	I0127 12:36:58.426144    9948 command_runner.go:130] ! I0127 12:12:03.126986       1 shared_informer.go:320] Caches are synced for endpoint
	I0127 12:36:58.426144    9948 command_runner.go:130] ! I0127 12:12:03.127087       1 shared_informer.go:320] Caches are synced for taint
	I0127 12:36:58.426174    9948 command_runner.go:130] ! I0127 12:12:03.127203       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0127 12:36:58.426216    9948 command_runner.go:130] ! I0127 12:12:03.127313       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000"
	I0127 12:36:58.426216    9948 command_runner.go:130] ! I0127 12:12:03.127524       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0127 12:36:58.426216    9948 command_runner.go:130] ! I0127 12:12:03.137503       1 shared_informer.go:320] Caches are synced for service account
	I0127 12:36:58.426283    9948 command_runner.go:130] ! I0127 12:12:03.137554       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:36:58.426283    9948 command_runner.go:130] ! I0127 12:12:03.138208       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0127 12:36:58.426283    9948 command_runner.go:130] ! I0127 12:12:03.138217       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0127 12:36:58.426283    9948 command_runner.go:130] ! I0127 12:12:03.138352       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0127 12:36:58.426347    9948 command_runner.go:130] ! I0127 12:12:03.141127       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0127 12:36:58.426373    9948 command_runner.go:130] ! I0127 12:12:03.141405       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0127 12:36:58.426373    9948 command_runner.go:130] ! I0127 12:12:03.141415       1 shared_informer.go:320] Caches are synced for TTL
	I0127 12:36:58.426373    9948 command_runner.go:130] ! I0127 12:12:03.141424       1 shared_informer.go:320] Caches are synced for ephemeral
	I0127 12:36:58.426427    9948 command_runner.go:130] ! I0127 12:12:03.141607       1 shared_informer.go:320] Caches are synced for daemon sets
	I0127 12:36:58.426451    9948 command_runner.go:130] ! I0127 12:12:03.141617       1 shared_informer.go:320] Caches are synced for stateful set
	I0127 12:36:58.426451    9948 command_runner.go:130] ! I0127 12:12:03.142442       1 shared_informer.go:320] Caches are synced for cronjob
	I0127 12:36:58.426451    9948 command_runner.go:130] ! I0127 12:12:03.146511       1 shared_informer.go:320] Caches are synced for persistent volume
	I0127 12:36:58.426506    9948 command_runner.go:130] ! I0127 12:12:03.150765       1 shared_informer.go:320] Caches are synced for expand
	I0127 12:36:58.426506    9948 command_runner.go:130] ! I0127 12:12:03.152122       1 shared_informer.go:320] Caches are synced for PVC protection
	I0127 12:36:58.426530    9948 command_runner.go:130] ! I0127 12:12:03.160180       1 shared_informer.go:320] Caches are synced for GC
	I0127 12:36:58.426530    9948 command_runner.go:130] ! I0127 12:12:03.164570       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:36:58.426585    9948 command_runner.go:130] ! I0127 12:12:03.170520       1 shared_informer.go:320] Caches are synced for namespace
	I0127 12:36:58.426585    9948 command_runner.go:130] ! I0127 12:12:03.185040       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0127 12:36:58.426585    9948 command_runner.go:130] ! I0127 12:12:03.186131       1 shared_informer.go:320] Caches are synced for HPA
	I0127 12:36:58.426585    9948 command_runner.go:130] ! I0127 12:12:03.188683       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0127 12:36:58.426648    9948 command_runner.go:130] ! I0127 12:12:03.191196       1 shared_informer.go:320] Caches are synced for crt configmap
	I0127 12:36:58.426648    9948 command_runner.go:130] ! I0127 12:12:03.192089       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0127 12:36:58.426648    9948 command_runner.go:130] ! I0127 12:12:03.192497       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0127 12:36:58.426648    9948 command_runner.go:130] ! I0127 12:12:03.192682       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0127 12:36:58.426648    9948 command_runner.go:130] ! I0127 12:12:03.192862       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0127 12:36:58.426648    9948 command_runner.go:130] ! I0127 12:12:03.193013       1 shared_informer.go:320] Caches are synced for garbage collector
	I0127 12:36:58.426648    9948 command_runner.go:130] ! I0127 12:12:03.193030       1 shared_informer.go:320] Caches are synced for job
	I0127 12:36:58.426648    9948 command_runner.go:130] ! I0127 12:12:03.193151       1 shared_informer.go:320] Caches are synced for deployment
	I0127 12:36:58.426884    9948 command_runner.go:130] ! I0127 12:12:03.193982       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0127 12:36:58.426913    9948 command_runner.go:130] ! I0127 12:12:03.194157       1 shared_informer.go:320] Caches are synced for node
	I0127 12:36:58.426913    9948 command_runner.go:130] ! I0127 12:12:03.194244       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0127 12:36:58.426913    9948 command_runner.go:130] ! I0127 12:12:03.194281       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0127 12:36:58.426913    9948 command_runner.go:130] ! I0127 12:12:03.194310       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0127 12:36:58.426981    9948 command_runner.go:130] ! I0127 12:12:03.194318       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0127 12:36:58.427009    9948 command_runner.go:130] ! I0127 12:12:03.194846       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0127 12:36:58.427009    9948 command_runner.go:130] ! I0127 12:12:03.196614       1 shared_informer.go:320] Caches are synced for resource quota
	I0127 12:36:58.427009    9948 command_runner.go:130] ! I0127 12:12:03.197111       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0127 12:36:58.427009    9948 command_runner.go:130] ! I0127 12:12:03.197095       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0127 12:36:58.427110    9948 command_runner.go:130] ! I0127 12:12:03.199168       1 shared_informer.go:320] Caches are synced for disruption
	I0127 12:36:58.427130    9948 command_runner.go:130] ! I0127 12:12:03.200153       1 shared_informer.go:320] Caches are synced for attach detach
	I0127 12:36:58.427130    9948 command_runner.go:130] ! I0127 12:12:03.207229       1 shared_informer.go:320] Caches are synced for PV protection
	I0127 12:36:58.427130    9948 command_runner.go:130] ! I0127 12:12:03.214016       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-659000" podCIDRs=["10.244.0.0/24"]
	I0127 12:36:58.427130    9948 command_runner.go:130] ! I0127 12:12:03.214057       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.427231    9948 command_runner.go:130] ! I0127 12:12:03.214083       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:03.216325       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:03.840748       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:04.356274       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="345.711056ms"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:04.454747       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="97.841105ms"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:04.534437       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="79.56576ms"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:04.576528       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="41.959673ms"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:04.576771       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="53.3µs"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:26.045035       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:26.074083       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:26.085407       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="109.3µs"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:26.129584       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="119.3µs"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:27.964629       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="49.302µs"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:28.020606       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="31.923176ms"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:28.020971       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="110.703µs"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:28.132341       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:12:29.790464       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:15:07.611410       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m02\" does not exist"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:15:07.630009       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-659000-m02" podCIDRs=["10.244.1.0/24"]
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:15:07.631297       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:15:07.631526       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:15:07.655401       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:15:07.883346       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:15:08.169505       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m02"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:15:08.255644       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.427260    9948 command_runner.go:130] ! I0127 12:15:08.418223       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.427835    9948 command_runner.go:130] ! I0127 12:15:17.811768       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.427835    9948 command_runner.go:130] ! I0127 12:15:36.752543       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:58.427835    9948 command_runner.go:130] ! I0127 12:15:36.753915       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:15:36.769807       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:15:38.199464       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:15:38.449749       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:16:02.550786       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="103.313802ms"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:16:02.585867       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="34.67067ms"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:16:02.586257       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="347.903µs"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:16:02.588870       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="48.6µs"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:16:05.434486       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="13.589639ms"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:16:05.435765       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="54.401µs"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:16:05.890170       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="9.003392ms"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:16:05.890477       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="36.901µs"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:16:09.305780       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:16:33.434322       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:19:26.820887       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:19:54.916460       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:19:54.917420       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m03\" does not exist"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:19:54.965530       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-659000-m03" podCIDRs=["10.244.2.0/24"]
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:19:54.966061       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:19:54.966297       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:19:55.802981       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:19:56.378698       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:19:58.252320       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m03"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:19:58.280410       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:20:05.560777       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.427902    9948 command_runner.go:130] ! I0127 12:20:25.959831       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428470    9948 command_runner.go:130] ! I0127 12:20:28.750598       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428470    9948 command_runner.go:130] ! I0127 12:20:28.751325       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:58.428575    9948 command_runner.go:130] ! I0127 12:20:28.769163       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428575    9948 command_runner.go:130] ! I0127 12:20:33.279397       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428575    9948 command_runner.go:130] ! I0127 12:23:26.795899       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.428575    9948 command_runner.go:130] ! I0127 12:24:32.956118       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.428662    9948 command_runner.go:130] ! I0127 12:25:42.001288       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428691    9948 command_runner.go:130] ! I0127 12:28:32.628178       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:28:38.397672       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:28:38.399092       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:28:38.428451       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:28:43.510900       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:29:38.000555       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:30:52.866288       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:30:52.895359       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:30:58.140304       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:31:04.208510       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:31:04.209007       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m03\" does not exist"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:31:04.238560       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-659000-m03" podCIDRs=["10.244.3.0/24"]
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:31:04.238634       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! E0127 12:31:04.255963       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-659000-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-659000-m03" podCIDRs=["10.244.4.0/24"]
	I0127 12:36:58.428747    9948 command_runner.go:130] ! E0127 12:31:04.256068       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-659000-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-659000-m03"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! E0127 12:31:04.256109       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'multinode-659000-m03': failed to patch node CIDR: Node \"multinode-659000-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:31:04.256134       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:31:04.261242       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:31:04.513319       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.428747    9948 command_runner.go:130] ! I0127 12:31:05.081710       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.429345    9948 command_runner.go:130] ! I0127 12:31:08.523576       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.429394    9948 command_runner.go:130] ! I0127 12:31:14.394811       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.429394    9948 command_runner.go:130] ! I0127 12:31:22.407069       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:58.429394    9948 command_runner.go:130] ! I0127 12:31:22.407472       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.429394    9948 command_runner.go:130] ! I0127 12:31:22.419743       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.429394    9948 command_runner.go:130] ! I0127 12:31:23.498434       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.429394    9948 command_runner.go:130] ! I0127 12:33:08.544063       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:58.429394    9948 command_runner.go:130] ! I0127 12:33:08.544656       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.429394    9948 command_runner.go:130] ! I0127 12:33:08.574301       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.429394    9948 command_runner.go:130] ! I0127 12:33:13.661256       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:58.451155    9948 logs.go:123] Gathering logs for Docker ...
	I0127 12:36:58.451155    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0127 12:36:58.479838    9948 command_runner.go:130] > Jan 27 12:34:11 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0127 12:36:58.479838    9948 command_runner.go:130] > Jan 27 12:34:11 minikube cri-dockerd[223]: time="2025-01-27T12:34:11Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0127 12:36:58.479838    9948 command_runner.go:130] > Jan 27 12:34:11 minikube cri-dockerd[223]: time="2025-01-27T12:34:11Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0127 12:36:58.480028    9948 command_runner.go:130] > Jan 27 12:34:11 minikube cri-dockerd[223]: time="2025-01-27T12:34:11Z" level=info msg="Start docker client with request timeout 0s"
	I0127 12:36:58.480028    9948 command_runner.go:130] > Jan 27 12:34:11 minikube cri-dockerd[223]: time="2025-01-27T12:34:11Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0127 12:36:58.480028    9948 command_runner.go:130] > Jan 27 12:34:11 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:58.480099    9948 command_runner.go:130] > Jan 27 12:34:11 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0127 12:36:58.480099    9948 command_runner.go:130] > Jan 27 12:34:11 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0127 12:36:58.480099    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0127 12:36:58.480099    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0127 12:36:58.480099    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0127 12:36:58.480099    9948 command_runner.go:130] > Jan 27 12:34:14 minikube cri-dockerd[404]: time="2025-01-27T12:34:14Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0127 12:36:58.480099    9948 command_runner.go:130] > Jan 27 12:34:14 minikube cri-dockerd[404]: time="2025-01-27T12:34:14Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0127 12:36:58.480099    9948 command_runner.go:130] > Jan 27 12:34:14 minikube cri-dockerd[404]: time="2025-01-27T12:34:14Z" level=info msg="Start docker client with request timeout 0s"
	I0127 12:36:58.480231    9948 command_runner.go:130] > Jan 27 12:34:14 minikube cri-dockerd[404]: time="2025-01-27T12:34:14Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0127 12:36:58.480231    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:58.480231    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0127 12:36:58.480231    9948 command_runner.go:130] > Jan 27 12:34:14 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0127 12:36:58.480302    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0127 12:36:58.480330    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0127 12:36:58.480330    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0127 12:36:58.480330    9948 command_runner.go:130] > Jan 27 12:34:16 minikube cri-dockerd[425]: time="2025-01-27T12:34:16Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0127 12:36:58.480330    9948 command_runner.go:130] > Jan 27 12:34:16 minikube cri-dockerd[425]: time="2025-01-27T12:34:16Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0127 12:36:58.480429    9948 command_runner.go:130] > Jan 27 12:34:16 minikube cri-dockerd[425]: time="2025-01-27T12:34:16Z" level=info msg="Start docker client with request timeout 0s"
	I0127 12:36:58.480453    9948 command_runner.go:130] > Jan 27 12:34:16 minikube cri-dockerd[425]: time="2025-01-27T12:34:16Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0127 12:36:58.480478    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0127 12:36:58.480478    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:34:16 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:34:18 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 systemd[1]: Starting Docker Application Container Engine...
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[653]: time="2025-01-27T12:35:01.316616305Z" level=info msg="Starting up"
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[653]: time="2025-01-27T12:35:01.317424338Z" level=info msg="containerd not running, starting managed containerd"
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[653]: time="2025-01-27T12:35:01.318870498Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=659
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.350184287Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374094572Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374181575Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374315681Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0127 12:36:58.480554    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374337282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.481149    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374861203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:58.481149    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.374889804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.481240    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375040811Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:58.481240    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375239819Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.481308    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375267320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:58.481361    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375281220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.481361    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.375833643Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.481361    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.376559373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.481361    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.379449292Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:58.481361    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.379538296Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.483009    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.379661901Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:58.483009    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.379800807Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0127 12:36:58.483106    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.380313228Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0127 12:36:58.483106    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.380441533Z" level=info msg="metadata content store policy set" policy=shared
	I0127 12:36:58.483106    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.385960360Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0127 12:36:58.483219    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386099266Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0127 12:36:58.483246    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386121867Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0127 12:36:58.483246    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386137768Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0127 12:36:58.483307    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386151968Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0127 12:36:58.483348    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386229971Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386475981Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386600687Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386685890Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386757893Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386815695Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386833196Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386854497Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386882698Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386897399Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386908999Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386920500Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386931000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386948401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.386962701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387079606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387099107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387131708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387149509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387164010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387179110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387194311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483378    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387212812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483913    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387227412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483913    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387242613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483957    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387257314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483957    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387275514Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0127 12:36:58.483957    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387300315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.483957    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387352418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.484041    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387385019Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0127 12:36:58.484092    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387423920Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0127 12:36:58.484132    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387443921Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0127 12:36:58.484132    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387454422Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0127 12:36:58.484175    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387465222Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0127 12:36:58.484239    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387473923Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387486423Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.387496523Z" level=info msg="NRI interface is disabled by configuration."
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.388077647Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.388176351Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.388221553Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:01 multinode-659000 dockerd[659]: time="2025-01-27T12:35:01.388239554Z" level=info msg="containerd successfully booted in 0.040630s"
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:02 multinode-659000 dockerd[653]: time="2025-01-27T12:35:02.375461301Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:02 multinode-659000 dockerd[653]: time="2025-01-27T12:35:02.619440119Z" level=info msg="Loading containers: start."
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:02 multinode-659000 dockerd[653]: time="2025-01-27T12:35:02.931712674Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.079754338Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.199112944Z" level=info msg="Loading containers: done."
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.227370410Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.227394111Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.227415612Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.227924231Z" level=info msg="Daemon has completed initialization"
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.267619030Z" level=info msg="API listen on /var/run/docker.sock"
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 dockerd[653]: time="2025-01-27T12:35:03.267851638Z" level=info msg="API listen on [::]:2376"
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:03 multinode-659000 systemd[1]: Started Docker Application Container Engine.
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.208684124Z" level=info msg="Processing signal 'terminated'"
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.210887831Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.211188432Z" level=info msg="Daemon shutdown complete"
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.211249132Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0127 12:36:58.484302    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 dockerd[653]: time="2025-01-27T12:35:28.211349733Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0127 12:36:58.484837    9948 command_runner.go:130] > Jan 27 12:35:28 multinode-659000 systemd[1]: Stopping Docker Application Container Engine...
	I0127 12:36:58.484837    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 systemd[1]: docker.service: Deactivated successfully.
	I0127 12:36:58.484886    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 systemd[1]: Stopped Docker Application Container Engine.
	I0127 12:36:58.484886    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 systemd[1]: Starting Docker Application Container Engine...
	I0127 12:36:58.484886    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:29.270852796Z" level=info msg="Starting up"
	I0127 12:36:58.484940    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:29.271817099Z" level=info msg="containerd not running, starting managed containerd"
	I0127 12:36:58.484940    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:29.272921603Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1109
	I0127 12:36:58.484940    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.304741210Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	I0127 12:36:58.485024    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329258592Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0127 12:36:58.485024    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329353092Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0127 12:36:58.485082    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329390892Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0127 12:36:58.485105    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329406192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329428593Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329441293Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329563193Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329667793Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329687993Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329698693Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329723194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.329854194Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.332844104Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.332945004Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333117005Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333187905Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333222205Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333244905Z" level=info msg="metadata content store policy set" policy=shared
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333669407Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333741907Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333760007Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333804107Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333825507Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.333876808Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334348509Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334487410Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334670410Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334694510Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0127 12:36:58.485134    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334722510Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.485667    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334740210Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.485707    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334754110Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.485707    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334768211Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.485707    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334783611Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.485707    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334797111Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334827611Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334839711Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334900511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334918411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334939711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334956111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.334972911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335000311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335303412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335328412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335345712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335365113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335379713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335394013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335408713Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335432513Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335458213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335473813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335509613Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335706914Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335751914Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335766514Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335779214Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335790814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335808914Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.335823714Z" level=info msg="NRI interface is disabled by configuration."
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.336050915Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0127 12:36:58.488982    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.336227915Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.336312916Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:29 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:29.336356016Z" level=info msg="containerd successfully booted in 0.033394s"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.313483202Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.352802934Z" level=info msg="Loading containers: start."
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.586901421Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.690006868Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.804531453Z" level=info msg="Loading containers: done."
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.832567747Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.832684748Z" level=info msg="Daemon has completed initialization"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.868895669Z" level=info msg="API listen on /var/run/docker.sock"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 systemd[1]: Started Docker Application Container Engine.
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:30 multinode-659000 dockerd[1103]: time="2025-01-27T12:35:30.869822273Z" level=info msg="API listen on [::]:2376"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Starting cri-dockerd 0.3.15 (c1c566e)"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Start docker client with request timeout 0s"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Loaded network plugin cni"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:31Z" level=info msg="Start cri-dockerd grpc backend"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:31 multinode-659000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:36Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-58667487b6-2jq9j_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"4c82c0ec4aeaa9b21462a8248326ae982d6f7a0aee31347f1a58d216f0335177\""
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:36 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:36Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-668d6bf9bc-2qw6w_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"4a53e133a1cd6ab9514cb15ac3c4f1d5683d17008b482cebb08bf4809e060709\""
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.148610487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.149713190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.149731191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.149823291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.227312151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.227946754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.228465355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.229058857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b770a357d98307d140bf1525f91cca5fa9278f7f9428b9b956db31e6a36de7f2/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.326758786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.326897686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.327082287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.327397788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.340486032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.340542232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.340557232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.340640833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/910315897d84204b3db03c56eaeac0c855a23f6250a406220a840c10e2dad7a7/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5601285bb260a8ced44a77e9dbb10f08580841c917885470ec5941525f08ee76/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cdf534e99b2bbcc52d3bf2ce73ef5d4299b5264cf0a050fa21ff7f6fe2bb3b2a/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.671974447Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.489980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.672075247Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.672094947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.673787353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.761333147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.761791949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.761989149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.763491554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.875104030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.875307231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.879314144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.879751245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.905404632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.905473732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.905487532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:37 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:37.905580032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:41 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:41Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:42.944884578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:42.944962279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:42.944975379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:42 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:42.945417180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.028307259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.028541060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.028779960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.029212562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.033020375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.033338176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.033463276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.033775977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/34d579bb511fec290478f20b13002063b43c1a71bd6f2f45f1d83bbd8ac971ab/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b613e9a7a356580fd5381e358408317fd6120a119c23f3f196adda302e5ca97f/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:35:43Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d43e4cc62e0877d4b65191623d58195cd33c60eff33c6e49e605f69620d5115f/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.564400062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.564959364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.565260665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.565864167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.593549260Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.594548363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.594809964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.595677067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.831064858Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.831237859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.831252459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:35:43 multinode-659000 dockerd[1109]: time="2025-01-27T12:35:43.831462360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:14.113708902Z" level=info msg="shim disconnected" id=9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f namespace=moby
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:14.113811702Z" level=warning msg="cleaning up after shim disconnected" id=9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f namespace=moby
	I0127 12:36:58.490980    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:14.113825002Z" level=info msg="cleaning up dead shim" namespace=moby
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:14 multinode-659000 dockerd[1103]: time="2025-01-27T12:36:14.115914814Z" level=info msg="ignoring event" container=9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.602318882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.604079090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.604098490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.604656892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.795612113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.795786714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.796654617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.796995818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861006350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861082751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861094651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861334452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:46 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:36:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6b22dbb5ef3e0d283203499fffad001c9c20c643564a55e5bfa5d6352f80e178/resolv.conf as [nameserver 172.29.192.1]"
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:36:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef504f99724cba01531b3894329439ae069a4ccac272e31bfac333cc24e62c53/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.321502068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.321825070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.321903471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.322491776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.384958874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.385201176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.385326577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.491980    9948 command_runner.go:130] > Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.385735080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0127 12:36:58.519981    9948 logs.go:123] Gathering logs for etcd [0ef2a3b50bae] ...
	I0127 12:36:58.519981    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0ef2a3b50bae"
	I0127 12:36:58.549005    9948 command_runner.go:130] ! {"level":"warn","ts":"2025-01-27T12:35:38.248296Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0127 12:36:58.549005    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.248523Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.29.198.106:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.29.198.106:2380","--initial-cluster=multinode-659000=https://172.29.198.106:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.29.198.106:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.29.198.106:2380","--name=multinode-659000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0127 12:36:58.549005    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.249804Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0127 12:36:58.549005    9948 command_runner.go:130] ! {"level":"warn","ts":"2025-01-27T12:35:38.249933Z","caller":"embed/config.go:689","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0127 12:36:58.549005    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.249951Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://172.29.198.106:2380"]}
	I0127 12:36:58.549005    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.250358Z","caller":"embed/etcd.go:497","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0127 12:36:58.549005    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.255871Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.29.198.106:2379"]}
	I0127 12:36:58.549005    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.258341Z","caller":"embed/etcd.go:311","msg":"starting an etcd server","etcd-version":"3.5.16","git-sha":"f20bbad","go-version":"go1.22.7","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-659000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.29.198.106:2380"],"listen-peer-urls":["https://172.29.198.106:2380"],"advertise-client-urls":["https://172.29.198.106:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.198.106:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.282453Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"23.428079ms"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.322950Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.352706Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"d020e240c474bd89","local-member-id":"925e6945be3a5b5b","commit-index":2090}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.354002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b switched to configuration voters=()"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.354079Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became follower at term 2"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.354103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 925e6945be3a5b5b [peers: [], term: 2, commit: 2090, applied: 0, lastindex: 2090, lastterm: 2]"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"warn","ts":"2025-01-27T12:35:38.367343Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.371532Z","caller":"mvcc/kvstore.go:346","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1388}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.377112Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":1808}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.386775Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.395908Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"925e6945be3a5b5b","timeout":"7s"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.396497Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"925e6945be3a5b5b"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.396684Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"925e6945be3a5b5b","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.396970Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.399309Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.401105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b switched to configuration voters=(10546983125613435739)"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.400045Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.404834Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.404888Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.405566Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d020e240c474bd89","local-member-id":"925e6945be3a5b5b","added-peer-id":"925e6945be3a5b5b","added-peer-peer-urls":["https://172.29.204.17:2380"]}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.405716Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d020e240c474bd89","local-member-id":"925e6945be3a5b5b","cluster-version":"3.5"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.405754Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.407643Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.408091Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"925e6945be3a5b5b","initial-advertise-peer-urls":["https://172.29.198.106:2380"],"listen-peer-urls":["https://172.29.198.106:2380"],"advertise-client-urls":["https://172.29.198.106:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.198.106:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.408386Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.408686Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.29.198.106:2380"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:38.408809Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.29.198.106:2380"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.355207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b is starting a new election at term 2"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.355615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became pre-candidate at term 2"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.355770Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b received MsgPreVoteResp from 925e6945be3a5b5b at term 2"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.355926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became candidate at term 3"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.356088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b received MsgVoteResp from 925e6945be3a5b5b at term 3"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.356235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became leader at term 3"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.356449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 925e6945be3a5b5b elected leader 925e6945be3a5b5b at term 3"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.368540Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"925e6945be3a5b5b","local-member-attributes":"{Name:multinode-659000 ClientURLs:[https://172.29.198.106:2379]}","request-path":"/0/members/925e6945be3a5b5b/attributes","cluster-id":"d020e240c474bd89","publish-timeout":"7s"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.369045Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0127 12:36:58.549991    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.371833Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0127 12:36:58.551004    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.372238Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0127 12:36:58.551004    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.374158Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0127 12:36:58.551004    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.383680Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0127 12:36:58.551004    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.391404Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	I0127 12:36:58.551004    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.392982Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.29.198.106:2379"}
	I0127 12:36:58.551004    9948 command_runner.go:130] ! {"level":"info","ts":"2025-01-27T12:35:39.399505Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
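
The etcd log above records a clean single-member restart: no snapshot is found so the WAL is replayed from scratch, member 925e6945be3a5b5b re-elects itself leader at term 3, and client traffic is then served on 172.29.198.106:2379 with the certificates under /var/lib/minikube/certs/etcd. To confirm that state by hand one could point etcdctl at the same endpoint and certificates; the following is only a minimal Go wrapper sketch, assuming it runs inside the minikube VM (e.g. via minikube ssh) where etcdctl and the cert files exist:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Endpoint and TLS paths are the ones etcd advertises in the log above
    	// ("advertise-client-urls" and /var/lib/minikube/certs/etcd/...).
    	cmd := exec.Command("etcdctl",
    		"--endpoints=https://172.29.198.106:2379",
    		"--cacert=/var/lib/minikube/certs/etcd/ca.crt",
    		"--cert=/var/lib/minikube/certs/etcd/server.crt",
    		"--key=/var/lib/minikube/certs/etcd/server.key",
    		"endpoint", "status", "--write-out=table")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Println("etcdctl failed:", err)
    	}
    }
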
	I0127 12:36:58.556995    9948 logs.go:123] Gathering logs for kube-scheduler [ed51c7eaa966] ...
	I0127 12:36:58.556995    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ed51c7eaa966"
	I0127 12:36:58.583234    9948 command_runner.go:130] ! I0127 12:35:39.285954       1 serving.go:386] Generated self-signed cert in-memory
	I0127 12:36:58.583234    9948 command_runner.go:130] ! W0127 12:35:41.361191       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0127 12:36:58.583234    9948 command_runner.go:130] ! W0127 12:35:41.363231       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0127 12:36:58.583234    9948 command_runner.go:130] ! W0127 12:35:41.363467       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0127 12:36:58.583234    9948 command_runner.go:130] ! W0127 12:35:41.363598       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 12:36:58.583234    9948 command_runner.go:130] ! I0127 12:35:41.458309       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 12:36:58.583234    9948 command_runner.go:130] ! I0127 12:35:41.458594       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:58.583234    9948 command_runner.go:130] ! I0127 12:35:41.465036       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 12:36:58.583234    9948 command_runner.go:130] ! I0127 12:35:41.465587       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 12:36:58.583234    9948 command_runner.go:130] ! I0127 12:35:41.466480       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:36:58.583234    9948 command_runner.go:130] ! I0127 12:35:41.466554       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:58.583234    9948 command_runner.go:130] ! I0127 12:35:41.567642       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:36:58.587252    9948 logs.go:123] Gathering logs for kube-scheduler [a16e06a03860] ...
	I0127 12:36:58.587252    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a16e06a03860"
	I0127 12:36:58.618437    9948 command_runner.go:130] ! I0127 12:11:54.280431       1 serving.go:386] Generated self-signed cert in-memory
	I0127 12:36:58.618543    9948 command_runner.go:130] ! W0127 12:11:55.581187       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0127 12:36:58.618593    9948 command_runner.go:130] ! W0127 12:11:55.581309       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0127 12:36:58.618593    9948 command_runner.go:130] ! W0127 12:11:55.581382       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0127 12:36:58.618593    9948 command_runner.go:130] ! W0127 12:11:55.581390       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 12:36:58.618593    9948 command_runner.go:130] ! I0127 12:11:55.694969       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 12:36:58.618593    9948 command_runner.go:130] ! I0127 12:11:55.695193       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:36:58.618593    9948 command_runner.go:130] ! I0127 12:11:55.700077       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 12:36:58.618593    9948 command_runner.go:130] ! I0127 12:11:55.700446       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 12:36:58.618593    9948 command_runner.go:130] ! I0127 12:11:55.700992       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:36:58.618593    9948 command_runner.go:130] ! I0127 12:11:55.701410       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:36:58.618593    9948 command_runner.go:130] ! W0127 12:11:55.715521       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:58.618593    9948 command_runner.go:130] ! E0127 12:11:55.717196       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.618593    9948 command_runner.go:130] ! W0127 12:11:55.717649       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0127 12:36:58.618593    9948 command_runner.go:130] ! E0127 12:11:55.717921       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.618593    9948 command_runner.go:130] ! W0127 12:11:55.718583       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0127 12:36:58.618593    9948 command_runner.go:130] ! E0127 12:11:55.718820       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.618593    9948 command_runner.go:130] ! W0127 12:11:55.728298       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0127 12:36:58.618593    9948 command_runner.go:130] ! E0127 12:11:55.728648       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0127 12:36:58.618593    9948 command_runner.go:130] ! W0127 12:11:55.729000       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0127 12:36:58.618593    9948 command_runner.go:130] ! E0127 12:11:55.729243       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.618593    9948 command_runner.go:130] ! W0127 12:11:55.729633       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0127 12:36:58.619119    9948 command_runner.go:130] ! E0127 12:11:55.730380       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.619162    9948 command_runner.go:130] ! W0127 12:11:55.729677       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0127 12:36:58.619197    9948 command_runner.go:130] ! E0127 12:11:55.730837       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.619197    9948 command_runner.go:130] ! W0127 12:11:55.729713       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0127 12:36:58.619197    9948 command_runner.go:130] ! W0127 12:11:55.729749       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:58.619197    9948 command_runner.go:130] ! E0127 12:11:55.731479       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.619197    9948 command_runner.go:130] ! W0127 12:11:55.729782       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:58.619979    9948 command_runner.go:130] ! E0127 12:11:55.732242       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.619979    9948 command_runner.go:130] ! W0127 12:11:55.729811       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:58.620042    9948 command_runner.go:130] ! E0127 12:11:55.734240       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.620137    9948 command_runner.go:130] ! E0127 12:11:55.734704       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.620167    9948 command_runner.go:130] ! W0127 12:11:55.738077       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0127 12:36:58.620167    9948 command_runner.go:130] ! E0127 12:11:55.738873       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.620167    9948 command_runner.go:130] ! W0127 12:11:55.739202       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0127 12:36:58.620167    9948 command_runner.go:130] ! E0127 12:11:55.739366       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.620167    9948 command_runner.go:130] ! W0127 12:11:55.739719       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0127 12:36:58.620167    9948 command_runner.go:130] ! E0127 12:11:55.739865       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.620167    9948 command_runner.go:130] ! W0127 12:11:55.740221       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0127 12:36:58.620167    9948 command_runner.go:130] ! E0127 12:11:55.740378       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.620167    9948 command_runner.go:130] ! W0127 12:11:55.740608       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:58.620167    9948 command_runner.go:130] ! E0127 12:11:55.740761       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.620167    9948 command_runner.go:130] ! W0127 12:11:56.556598       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0127 12:36:58.620167    9948 command_runner.go:130] ! E0127 12:11:56.557622       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.620167    9948 command_runner.go:130] ! W0127 12:11:56.595830       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:58.620167    9948 command_runner.go:130] ! E0127 12:11:56.596047       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.620694    9948 command_runner.go:130] ! W0127 12:11:56.691826       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0127 12:36:58.620694    9948 command_runner.go:130] ! E0127 12:11:56.691909       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0127 12:36:58.620825    9948 command_runner.go:130] ! W0127 12:11:56.806048       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:58.620914    9948 command_runner.go:130] ! E0127 12:11:56.806109       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.620938    9948 command_runner.go:130] ! W0127 12:11:56.846817       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0127 12:36:58.620989    9948 command_runner.go:130] ! E0127 12:11:56.847194       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.621027    9948 command_runner.go:130] ! W0127 12:11:56.871314       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0127 12:36:58.621027    9948 command_runner.go:130] ! E0127 12:11:56.872178       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.621027    9948 command_runner.go:130] ! W0127 12:11:56.887386       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0127 12:36:58.621027    9948 command_runner.go:130] ! E0127 12:11:56.887549       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.621027    9948 command_runner.go:130] ! W0127 12:11:56.918642       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0127 12:36:58.621027    9948 command_runner.go:130] ! E0127 12:11:56.919135       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.621027    9948 command_runner.go:130] ! W0127 12:11:57.039216       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:58.621027    9948 command_runner.go:130] ! E0127 12:11:57.039707       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.621027    9948 command_runner.go:130] ! W0127 12:11:57.055169       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0127 12:36:58.621027    9948 command_runner.go:130] ! E0127 12:11:57.055233       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.621027    9948 command_runner.go:130] ! W0127 12:11:57.106656       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0127 12:36:58.621027    9948 command_runner.go:130] ! E0127 12:11:57.106828       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.621027    9948 command_runner.go:130] ! W0127 12:11:57.214186       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:58.621027    9948 command_runner.go:130] ! E0127 12:11:57.214290       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.621027    9948 command_runner.go:130] ! W0127 12:11:57.298150       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0127 12:36:58.621027    9948 command_runner.go:130] ! E0127 12:11:57.298337       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.621551    9948 command_runner.go:130] ! W0127 12:11:57.310098       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0127 12:36:58.621625    9948 command_runner.go:130] ! E0127 12:11:57.310312       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.621625    9948 command_runner.go:130] ! W0127 12:11:57.312117       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	I0127 12:36:58.621736    9948 command_runner.go:130] ! E0127 12:11:57.312192       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.621736    9948 command_runner.go:130] ! W0127 12:11:57.321525       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0127 12:36:58.621804    9948 command_runner.go:130] ! E0127 12:11:57.321832       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:36:58.621804    9948 command_runner.go:130] ! I0127 12:11:59.701790       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:36:58.621804    9948 command_runner.go:130] ! I0127 12:33:15.443053       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0127 12:36:58.621878    9948 command_runner.go:130] ! I0127 12:33:15.443143       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0127 12:36:58.621905    9948 command_runner.go:130] ! I0127 12:33:15.452458       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 12:36:58.621933    9948 command_runner.go:130] ! E0127 12:33:15.487412       1 run.go:72] "command failed" err="finished without leader elect"
	I0127 12:36:58.632177    9948 logs.go:123] Gathering logs for kindnet [373bec67270f] ...
	I0127 12:36:58.632177    9948 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 373bec67270f"
	I0127 12:36:58.657220    9948 command_runner.go:130] ! I0127 12:35:44.464092       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0127 12:36:58.657220    9948 command_runner.go:130] ! I0127 12:35:44.489651       1 main.go:139] hostIP = 172.29.198.106
	I0127 12:36:58.657220    9948 command_runner.go:130] ! podIP = 172.29.198.106
	I0127 12:36:58.657514    9948 command_runner.go:130] ! I0127 12:35:44.489794       1 main.go:148] setting mtu 1500 for CNI 
	I0127 12:36:58.657514    9948 command_runner.go:130] ! I0127 12:35:44.489865       1 main.go:178] kindnetd IP family: "ipv4"
	I0127 12:36:58.657514    9948 command_runner.go:130] ! I0127 12:35:44.490024       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0127 12:36:58.657514    9948 command_runner.go:130] ! I0127 12:35:45.397363       1 main.go:239] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-40: Error: Could not process rule: Operation not supported
	I0127 12:36:58.657514    9948 command_runner.go:130] ! add table inet kindnet-network-policies
	I0127 12:36:58.657514    9948 command_runner.go:130] ! ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	I0127 12:36:58.657601    9948 command_runner.go:130] ! , skipping network policies
	I0127 12:36:58.657630    9948 command_runner.go:130] ! W0127 12:36:15.407551       1 reflector.go:561] pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0127 12:36:58.657630    9948 command_runner.go:130] ! E0127 12:36:15.407870       1 reflector.go:158] "Unhandled Error" err="pkg/mod/k8s.io/client-go@v0.31.1/tools/cache/reflector.go:243: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError"
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:25.405793       1 main.go:297] Handling node with IPs: map[172.29.198.106:{}]
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:25.405967       1 main.go:301] handling current node
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:25.406822       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:25.406903       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:25.408014       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.29.199.129 Flags: [] Table: 0 Realm: 0} 
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:25.408956       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:25.409055       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:25.409321       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.29.206.88 Flags: [] Table: 0 Realm: 0} 
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:35.400986       1 main.go:297] Handling node with IPs: map[172.29.198.106:{}]
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:35.401115       1 main.go:301] handling current node
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:35.401203       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:35.401377       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:35.401789       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:35.401927       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:45.400837       1 main.go:297] Handling node with IPs: map[172.29.198.106:{}]
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:45.401002       1 main.go:301] handling current node
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:45.401061       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:45.401072       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:45.401385       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:45.401462       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:55.406998       1 main.go:297] Handling node with IPs: map[172.29.198.106:{}]
	I0127 12:36:58.657699    9948 command_runner.go:130] ! I0127 12:36:55.407153       1 main.go:301] handling current node
	I0127 12:36:58.658201    9948 command_runner.go:130] ! I0127 12:36:55.407182       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:36:58.658201    9948 command_runner.go:130] ! I0127 12:36:55.407192       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:36:58.658201    9948 command_runner.go:130] ! I0127 12:36:55.407535       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:36:58.658201    9948 command_runner.go:130] ! I0127 12:36:55.407746       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
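
The kindnet entries above show the per-node route sync: the pod periodically walks the node list (here roughly every ten seconds), handles its own node, and for each peer (multinode-659000-m02, -m03) looks up the pod CIDR and installs a route to it via that peer's node IP when one is not already present (the "Adding route" lines at 12:36:25, e.g. 10.244.1.0/24 via 172.29.199.129). A rough Go sketch of that step, shelling out to "ip route replace" rather than using kindnet's own netlink code; the CIDR/gateway pairs are taken from the log and ensurePodCIDRRoute is an illustrative helper name:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensurePodCIDRRoute installs (or refreshes) a route that sends traffic for a
    // remote node's pod CIDR via that node's primary IP, mirroring the
    // "Adding route {... Dst: 10.244.1.0/24 ... Gw: 172.29.199.129 ...}" lines above.
    func ensurePodCIDRRoute(podCIDR, nodeIP string) error {
    	out, err := exec.Command("ip", "route", "replace", podCIDR, "via", nodeIP).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("ip route replace %s via %s: %v: %s", podCIDR, nodeIP, err, out)
    	}
    	return nil
    }

    func main() {
    	// Values observed in the kindnet log for multinode-659000-m02 / -m03.
    	routes := map[string]string{
    		"10.244.1.0/24": "172.29.199.129",
    		"10.244.3.0/24": "172.29.206.88",
    	}
    	for cidr, gw := range routes {
    		if err := ensurePodCIDRRoute(cidr, gw); err != nil {
    			fmt.Println("route sync failed:", err)
    		}
    	}
    }
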
	I0127 12:36:58.661181    9948 logs.go:123] Gathering logs for container status ...
	I0127 12:36:58.661181    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0127 12:36:58.721183    9948 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0127 12:36:58.721183    9948 command_runner.go:130] > 528243cca8bfb       8c811b4aec35f                                                                                         11 seconds ago       Running             busybox                   1                   ef504f99724cb       busybox-58667487b6-2jq9j
	I0127 12:36:58.721183    9948 command_runner.go:130] > b3a9ed6e130c0       c69fa2e9cbf5f                                                                                         11 seconds ago       Running             coredns                   1                   6b22dbb5ef3e0       coredns-668d6bf9bc-2qw6w
	I0127 12:36:58.721183    9948 command_runner.go:130] > 389606c183b19       6e38f40d628db                                                                                         31 seconds ago       Running             storage-provisioner       2                   b613e9a7a3565       storage-provisioner
	I0127 12:36:58.721183    9948 command_runner.go:130] > 373bec67270fb       50415e5d05f05                                                                                         About a minute ago   Running             kindnet-cni               1                   d43e4cc62e087       kindnet-z2hqq
	I0127 12:36:58.721183    9948 command_runner.go:130] > 9b2db1d0cb61c       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   b613e9a7a3565       storage-provisioner
	I0127 12:36:58.721183    9948 command_runner.go:130] > 0283b35dee3cc       e29f9c7391fd9                                                                                         About a minute ago   Running             kube-proxy                1                   34d579bb511fe       kube-proxy-s46mv
	I0127 12:36:58.721183    9948 command_runner.go:130] > ea993630a3109       95c0bda56fc4d                                                                                         About a minute ago   Running             kube-apiserver            0                   5601285bb260a       kube-apiserver-multinode-659000
	I0127 12:36:58.721183    9948 command_runner.go:130] > 0ef2a3b50bae8       a9e7e6b294baf                                                                                         About a minute ago   Running             etcd                      0                   cdf534e99b2bb       etcd-multinode-659000
	I0127 12:36:58.721183    9948 command_runner.go:130] > ed51c7eaa9666       2b0d6572d062c                                                                                         About a minute ago   Running             kube-scheduler            1                   910315897d842       kube-scheduler-multinode-659000
	I0127 12:36:58.721183    9948 command_runner.go:130] > 8d4872cda28de       019ee182b58e2                                                                                         About a minute ago   Running             kube-controller-manager   1                   b770a357d9830       kube-controller-manager-multinode-659000
	I0127 12:36:58.721183    9948 command_runner.go:130] > 998a64b2baa2d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago       Exited              busybox                   0                   4c82c0ec4aeaa       busybox-58667487b6-2jq9j
	I0127 12:36:58.721183    9948 command_runner.go:130] > f818dd15d8b02       c69fa2e9cbf5f                                                                                         24 minutes ago       Exited              coredns                   0                   4a53e133a1cd6       coredns-668d6bf9bc-2qw6w
	I0127 12:36:58.721183    9948 command_runner.go:130] > d758000dda95d       kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108              24 minutes ago       Exited              kindnet-cni               0                   f2d0bd65fe50d       kindnet-z2hqq
	I0127 12:36:58.721183    9948 command_runner.go:130] > bbec7ccef7da5       e29f9c7391fd9                                                                                         24 minutes ago       Exited              kube-proxy                0                   319cddeebceb6       kube-proxy-s46mv
	I0127 12:36:58.721183    9948 command_runner.go:130] > a16e06a038601       2b0d6572d062c                                                                                         25 minutes ago       Exited              kube-scheduler            0                   5423fc5113290       kube-scheduler-multinode-659000
	I0127 12:36:58.721183    9948 command_runner.go:130] > e07a66f8f6196       019ee182b58e2                                                                                         25 minutes ago       Exited              kube-controller-manager   0                   1bd5bf99bede3       kube-controller-manager-multinode-659000
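
The log-collection steps above are plain shell commands run inside the VM over SSH: "docker logs --tail 400 <container>" for each control-plane container, and for the inventory a crictl listing with a docker fallback ("crictl ps -a || docker ps -a"). The following is a small standalone Go sketch of those two probes using os/exec; the container ID is one from the table above, and it assumes docker/crictl are on PATH wherever it runs:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // tailContainerLogs mirrors the per-container probe above:
    // grab the last 400 log lines for a container ID.
    func tailContainerLogs(id string) (string, error) {
    	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    	return string(out), err
    }

    // listContainers mirrors the container-status probe: prefer crictl,
    // fall back to docker, as in "crictl ps -a || docker ps -a".
    func listContainers() (string, error) {
    	if out, err := exec.Command("crictl", "ps", "-a").CombinedOutput(); err == nil {
    		return string(out), nil
    	}
    	out, err := exec.Command("docker", "ps", "-a").CombinedOutput()
    	return string(out), err
    }

    func main() {
    	if logs, err := tailContainerLogs("ed51c7eaa966"); err == nil {
    		fmt.Print(logs)
    	}
    	if ps, err := listContainers(); err == nil {
    		fmt.Print(ps)
    	}
    }
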
	I0127 12:37:01.224548    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods
	I0127 12:37:01.224548    9948 round_trippers.go:469] Request Headers:
	I0127 12:37:01.224548    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:37:01.224548    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:37:01.235963    9948 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0127 12:37:01.235963    9948 round_trippers.go:577] Response Headers:
	I0127 12:37:01.235963    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:37:01.235963    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:37:01 GMT
	I0127 12:37:01.235963    9948 round_trippers.go:580]     Audit-Id: ea5a0f6d-fc63-43ff-bbfd-7fc2ef1e13dd
	I0127 12:37:01.235963    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:37:01.235963    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:37:01.235963    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:37:01.238848    9948 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2041"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"2024","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90382 chars]
	I0127 12:37:01.243235    9948 system_pods.go:59] 12 kube-system pods found
	I0127 12:37:01.243269    9948 system_pods.go:61] "coredns-668d6bf9bc-2qw6w" [8f0367fc-d842-4cc3-8e71-30869a548243] Running
	I0127 12:37:01.243269    9948 system_pods.go:61] "etcd-multinode-659000" [4c33fa42-51a7-4a7a-a497-cce80b8773d6] Running
	I0127 12:37:01.243269    9948 system_pods.go:61] "kindnet-kpfjt" [b00e6ead-b072-40b5-9c87-7697316d8107] Running
	I0127 12:37:01.243269    9948 system_pods.go:61] "kindnet-n7vjl" [23617db6-b970-4ead-845b-69776d50ffef] Running
	I0127 12:37:01.243269    9948 system_pods.go:61] "kindnet-z2hqq" [9b617a9c-e2b8-45fd-bee2-45cb03d4cd42] Running
	I0127 12:37:01.243269    9948 system_pods.go:61] "kube-apiserver-multinode-659000" [8fbee94f-fd8f-4431-bd9f-b75d49cb19d4] Running
	I0127 12:37:01.243269    9948 system_pods.go:61] "kube-controller-manager-multinode-659000" [8be02f36-161c-44f3-b526-56db3b8a007a] Running
	I0127 12:37:01.243269    9948 system_pods.go:61] "kube-proxy-pjhc8" [ddb6698c-b83d-4a49-9672-c894e87cbb66] Running
	I0127 12:37:01.243269    9948 system_pods.go:61] "kube-proxy-s46mv" [ae3b8daf-d674-4cfe-8652-cb5ff6ba8615] Running
	I0127 12:37:01.243269    9948 system_pods.go:61] "kube-proxy-sk5js" [ba679e1d-713c-4bd4-b267-2b887c1ac4df] Running
	I0127 12:37:01.243444    9948 system_pods.go:61] "kube-scheduler-multinode-659000" [52b91964-a331-4925-9e07-c8df32b4176d] Running
	I0127 12:37:01.243444    9948 system_pods.go:61] "storage-provisioner" [bcfd7913-1bc0-4c24-882f-2be92ec9b046] Running
	I0127 12:37:01.243475    9948 system_pods.go:74] duration metric: took 3.7146405s to wait for pod list to return data ...
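
The round_trippers / system_pods lines above are minikube polling the API server directly: a GET on /api/v1/namespaces/kube-system/pods, a check that all 12 pods report Running, and (a little further down) the same list repeated with the k8s-app=kube-dns label selector. The equivalent check written against client-go, as a sketch only; the kubeconfig is taken from $KUBECONFIG and error handling is kept minimal:

    package main

    import (
    	"context"
    	"fmt"
    	"os"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Build a client from the kubeconfig the test harness points at.
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Equivalent of the GET .../namespaces/kube-system/pods above:
    	// list the pods and report any that are not Running yet.
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		if p.Status.Phase != corev1.PodRunning {
    			fmt.Printf("%s is %s\n", p.Name, p.Status.Phase)
    		}
    	}

    	// Equivalent of the follow-up query with ?labelSelector=k8s-app=kube-dns:
    	// only the CoreDNS pods.
    	dns, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
    		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-dns pods\n", len(dns.Items))
    }
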
	I0127 12:37:01.243475    9948 default_sa.go:34] waiting for default service account to be created ...
	I0127 12:37:01.243680    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/default/serviceaccounts
	I0127 12:37:01.243722    9948 round_trippers.go:469] Request Headers:
	I0127 12:37:01.243722    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:37:01.243722    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:37:01.247432    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:37:01.247432    9948 round_trippers.go:577] Response Headers:
	I0127 12:37:01.247432    9948 round_trippers.go:580]     Audit-Id: 747ffff5-82fc-4ca7-b092-f9df2bbbeae0
	I0127 12:37:01.248162    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:37:01.248162    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:37:01.248162    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:37:01.248162    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:37:01.248162    9948 round_trippers.go:580]     Content-Length: 262
	I0127 12:37:01.248162    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:37:01 GMT
	I0127 12:37:01.248210    9948 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"2041"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"bff364bd-d78f-41e4-90bc-c2009fb4813f","resourceVersion":"328","creationTimestamp":"2025-01-27T12:12:03Z"}}]}
	I0127 12:37:01.248428    9948 default_sa.go:45] found service account: "default"
	I0127 12:37:01.248428    9948 default_sa.go:55] duration metric: took 4.9532ms for default service account to be created ...
	I0127 12:37:01.248428    9948 system_pods.go:137] waiting for k8s-apps to be running ...
	I0127 12:37:01.248428    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods
	I0127 12:37:01.248428    9948 round_trippers.go:469] Request Headers:
	I0127 12:37:01.248428    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:37:01.248428    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:37:01.256117    9948 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0127 12:37:01.256117    9948 round_trippers.go:577] Response Headers:
	I0127 12:37:01.256186    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:37:01.256186    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:37:01.256186    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:37:01.256221    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:37:01 GMT
	I0127 12:37:01.256221    9948 round_trippers.go:580]     Audit-Id: 71aafcdd-9018-43fc-bcd3-215a8cc752ff
	I0127 12:37:01.256221    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:37:01.257749    9948 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2041"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"2024","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 90382 chars]
	I0127 12:37:01.262220    9948 system_pods.go:87] 12 kube-system pods found
	I0127 12:37:01.262411    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-dns
	I0127 12:37:01.262442    9948 round_trippers.go:469] Request Headers:
	I0127 12:37:01.262442    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:37:01.262442    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:37:01.265501    9948 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0127 12:37:01.265588    9948 round_trippers.go:577] Response Headers:
	I0127 12:37:01.265588    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:37:01 GMT
	I0127 12:37:01.265622    9948 round_trippers.go:580]     Audit-Id: 5d6e4d9b-4f40-48c1-8fbb-5d22b550192a
	I0127 12:37:01.265622    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:37:01.265622    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:37:01.265622    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:37:01.265622    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:37:01.265933    9948 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2041"},"items":[{"metadata":{"name":"coredns-668d6bf9bc-2qw6w","generateName":"coredns-668d6bf9bc-","namespace":"kube-system","uid":"8f0367fc-d842-4cc3-8e71-30869a548243","resourceVersion":"2024","creationTimestamp":"2025-01-27T12:12:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"668d6bf9bc"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-668d6bf9bc","uid":"c804286c-b5a0-420d-b02a-22ff4523cf5e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2025-01-27T12:12:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c804286c-b5a0-420d-b02a-22ff4523cf5e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 7100 chars]
	I0127 12:37:01.266670    9948 system_pods.go:105] "coredns-668d6bf9bc-2qw6w" [8f0367fc-d842-4cc3-8e71-30869a548243] Running
	I0127 12:37:01.266670    9948 system_pods.go:105] "etcd-multinode-659000" [4c33fa42-51a7-4a7a-a497-cce80b8773d6] Running
	I0127 12:37:01.266716    9948 system_pods.go:105] "kindnet-kpfjt" [b00e6ead-b072-40b5-9c87-7697316d8107] Running
	I0127 12:37:01.266716    9948 system_pods.go:105] "kindnet-n7vjl" [23617db6-b970-4ead-845b-69776d50ffef] Running
	I0127 12:37:01.266716    9948 system_pods.go:105] "kindnet-z2hqq" [9b617a9c-e2b8-45fd-bee2-45cb03d4cd42] Running
	I0127 12:37:01.266744    9948 system_pods.go:105] "kube-apiserver-multinode-659000" [8fbee94f-fd8f-4431-bd9f-b75d49cb19d4] Running
	I0127 12:37:01.266744    9948 system_pods.go:105] "kube-controller-manager-multinode-659000" [8be02f36-161c-44f3-b526-56db3b8a007a] Running
	I0127 12:37:01.266744    9948 system_pods.go:105] "kube-proxy-pjhc8" [ddb6698c-b83d-4a49-9672-c894e87cbb66] Running
	I0127 12:37:01.266744    9948 system_pods.go:105] "kube-proxy-s46mv" [ae3b8daf-d674-4cfe-8652-cb5ff6ba8615] Running
	I0127 12:37:01.266744    9948 system_pods.go:105] "kube-proxy-sk5js" [ba679e1d-713c-4bd4-b267-2b887c1ac4df] Running
	I0127 12:37:01.266790    9948 system_pods.go:105] "kube-scheduler-multinode-659000" [52b91964-a331-4925-9e07-c8df32b4176d] Running
	I0127 12:37:01.266820    9948 system_pods.go:105] "storage-provisioner" [bcfd7913-1bc0-4c24-882f-2be92ec9b046] Running
	I0127 12:37:01.266820    9948 system_pods.go:147] duration metric: took 18.3916ms to wait for k8s-apps to be running ...
	I0127 12:37:01.266820    9948 system_svc.go:44] waiting for kubelet service to be running ....
	I0127 12:37:01.280154    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:37:01.306558    9948 system_svc.go:56] duration metric: took 39.7375ms WaitForService to wait for kubelet
	I0127 12:37:01.306558    9948 kubeadm.go:582] duration metric: took 1m14.3271236s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
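The kubelet gate above shells out to `systemctl is-active --quiet service kubelet` over SSH and treats a zero exit status as "running". A standalone sketch of that status check, assuming it runs on a systemd host directly rather than through the SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` exits 0 only when the unit is active.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}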
	I0127 12:37:01.306558    9948 node_conditions.go:102] verifying NodePressure condition ...
	I0127 12:37:01.306558    9948 round_trippers.go:463] GET https://172.29.198.106:8443/api/v1/nodes
	I0127 12:37:01.306558    9948 round_trippers.go:469] Request Headers:
	I0127 12:37:01.306558    9948 round_trippers.go:473]     Accept: application/json, */*
	I0127 12:37:01.306558    9948 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0127 12:37:01.312166    9948 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0127 12:37:01.312166    9948 round_trippers.go:577] Response Headers:
	I0127 12:37:01.312166    9948 round_trippers.go:580]     Content-Type: application/json
	I0127 12:37:01.312166    9948 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c318c9e2-e9ed-4fdb-8297-a0d67bf8294c
	I0127 12:37:01.312166    9948 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: be31cfc1-a27d-40a4-92eb-255714bebaf6
	I0127 12:37:01.312166    9948 round_trippers.go:580]     Date: Mon, 27 Jan 2025 12:37:01 GMT
	I0127 12:37:01.312166    9948 round_trippers.go:580]     Audit-Id: 6d477784-cfda-4482-9fcd-64a22c4afb4e
	I0127 12:37:01.312296    9948 round_trippers.go:580]     Cache-Control: no-cache, private
	I0127 12:37:01.312578    9948 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2041"},"items":[{"metadata":{"name":"multinode-659000","uid":"ac8b4b5f-6620-484c-8fb9-870894acc2c4","resourceVersion":"1990","creationTimestamp":"2025-01-27T12:11:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-659000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"21d19df81a8d69cdaec1a8f1932c09dc00369650","minikube.k8s.io/name":"multinode-659000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2025_01_27T12_12_00_0700","minikube.k8s.io/version":"v1.35.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16260 chars]
	I0127 12:37:01.313729    9948 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:37:01.313756    9948 node_conditions.go:123] node cpu capacity is 2
	I0127 12:37:01.313756    9948 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:37:01.313823    9948 node_conditions.go:123] node cpu capacity is 2
	I0127 12:37:01.313823    9948 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0127 12:37:01.313823    9948 node_conditions.go:123] node cpu capacity is 2
	I0127 12:37:01.313852    9948 node_conditions.go:105] duration metric: took 7.2648ms to run NodePressure ...
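The NodePressure step reads each node's CPU and ephemeral-storage capacity out of the NodeList response. A rough command-line equivalent via kubectl's jsonpath output, shown here as a hedged sketch (the field paths are assumed to match the capacity keys reported in the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Print each node's CPU and ephemeral-storage capacity, the same two
	// fields the NodePressure check above inspects.
	out, err := exec.Command("kubectl", "get", "nodes",
		"-o", `jsonpath={range .items[*]}{.metadata.name}{"\t"}{.status.capacity.cpu}{"\t"}{.status.capacity.ephemeral-storage}{"\n"}{end}`).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Print(string(out))
}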
	I0127 12:37:01.313852    9948 start.go:241] waiting for startup goroutines ...
	I0127 12:37:01.313852    9948 start.go:246] waiting for cluster config update ...
	I0127 12:37:01.313852    9948 start.go:255] writing updated cluster config ...
	I0127 12:37:01.317889    9948 out.go:201] 
	I0127 12:37:01.321567    9948 config.go:182] Loaded profile config "ha-011400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:37:01.338316    9948 config.go:182] Loaded profile config "multinode-659000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:37:01.338535    9948 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\config.json ...
	I0127 12:37:01.344930    9948 out.go:177] * Starting "multinode-659000-m02" worker node in "multinode-659000" cluster
	I0127 12:37:01.347220    9948 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
	I0127 12:37:01.347389    9948 cache.go:56] Caching tarball of preloaded images
	I0127 12:37:01.347389    9948 preload.go:172] Found C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0127 12:37:01.347389    9948 cache.go:59] Finished verifying existence of preloaded tar for v1.32.1 on docker
	I0127 12:37:01.347389    9948 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\config.json ...
	I0127 12:37:01.350997    9948 start.go:360] acquireMachinesLock for multinode-659000-m02: {Name:mke1d3e045ff2a4f8d2978e08dff146c93a87110 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0127 12:37:01.351226    9948 start.go:364] duration metric: took 179.4µs to acquireMachinesLock for "multinode-659000-m02"
	I0127 12:37:01.351449    9948 start.go:96] Skipping create...Using existing machine configuration
	I0127 12:37:01.351449    9948 fix.go:54] fixHost starting: m02
	I0127 12:37:01.351580    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:37:03.479216    9948 main.go:141] libmachine: [stdout =====>] : Off
	
	I0127 12:37:03.479216    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:03.479334    9948 fix.go:112] recreateIfNeeded on multinode-659000-m02: state=Stopped err=<nil>
	W0127 12:37:03.479334    9948 fix.go:138] unexpected machine state, will restart: <nil>
	I0127 12:37:03.484947    9948 out.go:177] * Restarting existing hyperv VM for "multinode-659000-m02" ...
	I0127 12:37:03.488036    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-659000-m02
	I0127 12:37:06.597718    9948 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:37:06.598392    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:06.598392    9948 main.go:141] libmachine: Waiting for host to start...
	I0127 12:37:06.598473    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:37:08.996533    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:37:08.996533    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:08.996845    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:37:11.538823    9948 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:37:11.538823    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:12.539348    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:37:14.775891    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:37:14.775891    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:14.775891    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:37:17.301803    9948 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:37:17.302472    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:18.302890    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:37:20.462482    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:37:20.463226    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:20.463292    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:37:22.951261    9948 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:37:22.951261    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:23.951341    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:37:26.166650    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:37:26.167547    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:26.167547    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:37:28.727343    9948 main.go:141] libmachine: [stdout =====>] : 
	I0127 12:37:28.727382    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:29.728069    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:37:31.989341    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:37:31.990155    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:31.990229    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:37:34.561762    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:37:34.561762    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:34.565772    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:37:36.707428    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:37:36.707428    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:36.707428    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:37:39.369556    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:37:39.369556    9948 main.go:141] libmachine: [stderr =====>] : 
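The restart loop above repeatedly asks Hyper-V for the VM state and for the first address on the VM's first network adapter, sleeping between attempts until the guest publishes an IP (172.29.205.217 here). A minimal sketch of that polling, shelling out to powershell.exe the same way the log does; the VM name, retry count and interval are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// vmIP asks Hyper-V (via PowerShell) for the first IP address reported by the
// VM's first network adapter, the same query repeated in the log above.
func vmIP(name string) (string, error) {
	script := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", name)
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const vm = "multinode-659000-m02" // name taken from the log above
	for i := 0; i < 30; i++ {
		ip, err := vmIP(vm)
		if err == nil && ip != "" {
			fmt.Println("VM is reachable at", ip)
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for an IP address")
}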
	I0127 12:37:39.369556    9948 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\multinode-659000\config.json ...
	I0127 12:37:39.374611    9948 machine.go:93] provisionDockerMachine start ...
	I0127 12:37:39.374611    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:37:41.713498    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:37:41.713498    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:41.713859    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:37:44.375957    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:37:44.375957    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:44.381823    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:37:44.381961    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.205.217 22 <nil> <nil>}
	I0127 12:37:44.381961    9948 main.go:141] libmachine: About to run SSH command:
	hostname
	I0127 12:37:44.519223    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0127 12:37:44.519223    9948 buildroot.go:166] provisioning hostname "multinode-659000-m02"
	I0127 12:37:44.519378    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:37:46.737401    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:37:46.737735    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:46.737735    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:37:49.413255    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:37:49.413255    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:49.419714    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:37:49.420455    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.205.217 22 <nil> <nil>}
	I0127 12:37:49.420455    9948 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-659000-m02 && echo "multinode-659000-m02" | sudo tee /etc/hostname
	I0127 12:37:49.586768    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-659000-m02
	
	I0127 12:37:49.586768    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:37:51.773746    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:37:51.773746    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:51.774730    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:37:54.292118    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:37:54.292118    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:54.301229    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:37:54.301229    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.205.217 22 <nil> <nil>}
	I0127 12:37:54.301229    9948 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-659000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-659000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-659000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0127 12:37:54.457982    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 12:37:54.458065    9948 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube6\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube6\minikube-integration\.minikube}
	I0127 12:37:54.458065    9948 buildroot.go:174] setting up certificates
	I0127 12:37:54.458065    9948 provision.go:84] configureAuth start
	I0127 12:37:54.458198    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:37:56.616418    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:37:56.616573    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:56.616731    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:37:59.164609    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:37:59.164609    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:37:59.164609    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:38:01.397284    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:38:01.397528    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:01.397528    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:38:03.969402    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:38:03.969402    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:03.969402    9948 provision.go:143] copyHostCerts
	I0127 12:38:03.970066    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem
	I0127 12:38:03.970066    9948 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem, removing ...
	I0127 12:38:03.970066    9948 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\ca.pem
	I0127 12:38:03.970851    9948 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0127 12:38:03.971760    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem
	I0127 12:38:03.972442    9948 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem, removing ...
	I0127 12:38:03.972442    9948 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cert.pem
	I0127 12:38:03.972442    9948 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0127 12:38:03.973604    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem
	I0127 12:38:03.974202    9948 exec_runner.go:144] found C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem, removing ...
	I0127 12:38:03.974202    9948 exec_runner.go:203] rm: C:\Users\jenkins.minikube6\minikube-integration\.minikube\key.pem
	I0127 12:38:03.974299    9948 exec_runner.go:151] cp: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube6\minikube-integration\.minikube/key.pem (1675 bytes)
	I0127 12:38:03.975577    9948 provision.go:117] generating server cert: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-659000-m02 san=[127.0.0.1 172.29.205.217 localhost minikube multinode-659000-m02]
	I0127 12:38:04.272193    9948 provision.go:177] copyRemoteCerts
	I0127 12:38:04.284343    9948 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0127 12:38:04.284480    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:38:06.398935    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:38:06.398935    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:06.398935    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:38:08.938160    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:38:08.938160    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:08.938160    9948 sshutil.go:53] new ssh client: &{IP:172.29.205.217 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000-m02\id_rsa Username:docker}
	I0127 12:38:09.047327    9948 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7629345s)
	I0127 12:38:09.047471    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0127 12:38:09.047635    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0127 12:38:09.091832    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0127 12:38:09.092400    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0127 12:38:09.135988    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0127 12:38:09.136556    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0127 12:38:09.182090    9948 provision.go:87] duration metric: took 14.7238709s to configureAuth
	I0127 12:38:09.182090    9948 buildroot.go:189] setting minikube options for container-runtime
	I0127 12:38:09.182980    9948 config.go:182] Loaded profile config "multinode-659000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:38:09.183073    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:38:11.290925    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:38:11.291092    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:11.291227    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:38:13.823814    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:38:13.824910    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:13.830209    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:38:13.830793    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.205.217 22 <nil> <nil>}
	I0127 12:38:13.830793    9948 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0127 12:38:13.961510    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0127 12:38:13.961510    9948 buildroot.go:70] root file system type: tmpfs
	I0127 12:38:13.961754    9948 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0127 12:38:13.961754    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:38:16.107691    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:38:16.108080    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:16.108080    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:38:18.643315    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:38:18.643315    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:18.650637    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:38:18.650637    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.205.217 22 <nil> <nil>}
	I0127 12:38:18.651300    9948 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.29.198.106"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0127 12:38:18.797236    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.29.198.106
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0127 12:38:18.797785    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:38:20.929373    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:38:20.929373    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:20.930095    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:38:23.476272    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:38:23.476272    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:23.481591    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:38:23.481701    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.205.217 22 <nil> <nil>}
	I0127 12:38:23.481701    9948 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0127 12:38:25.865024    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0127 12:38:25.865063    9948 machine.go:96] duration metric: took 46.4899639s to provisionDockerMachine
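The unit update a few lines above is guarded: the generated docker.service.new is diffed against the installed unit, and only when they differ (or, as here, the installed file is missing) is it moved into place and followed by daemon-reload, enable and restart. A local sketch of that replace-and-restart-only-on-change pattern, with paths taken from the log; it requires root and is an illustration, not minikube's implementation:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged moves newPath over dstPath and restarts the unit only when
// the contents differ or dstPath does not exist, mirroring the diff-guarded
// update shown in the log.
func installIfChanged(newPath, dstPath, unit string) error {
	newData, err := os.ReadFile(newPath)
	if err != nil {
		return err
	}
	oldData, err := os.ReadFile(dstPath)
	if err == nil && bytes.Equal(oldData, newData) {
		return nil // nothing to do
	}
	if err := os.Rename(newPath, dstPath); err != nil {
		return err
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", unit}, {"restart", unit}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	err := installIfChanged("/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service", "docker")
	if err != nil {
		fmt.Println(err)
	}
}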
	I0127 12:38:25.865063    9948 start.go:293] postStartSetup for "multinode-659000-m02" (driver="hyperv")
	I0127 12:38:25.865063    9948 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0127 12:38:25.877709    9948 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0127 12:38:25.877709    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:38:27.997441    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:38:27.997441    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:27.997944    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:38:30.548758    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:38:30.548986    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:30.549171    9948 sshutil.go:53] new ssh client: &{IP:172.29.205.217 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000-m02\id_rsa Username:docker}
	I0127 12:38:30.648489    9948 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.7707305s)
	I0127 12:38:30.661598    9948 ssh_runner.go:195] Run: cat /etc/os-release
	I0127 12:38:30.667383    9948 command_runner.go:130] > NAME=Buildroot
	I0127 12:38:30.667383    9948 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0127 12:38:30.667383    9948 command_runner.go:130] > ID=buildroot
	I0127 12:38:30.667383    9948 command_runner.go:130] > VERSION_ID=2023.02.9
	I0127 12:38:30.667383    9948 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0127 12:38:30.667383    9948 info.go:137] Remote host: Buildroot 2023.02.9
	I0127 12:38:30.667383    9948 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\addons for local assets ...
	I0127 12:38:30.668914    9948 filesync.go:126] Scanning C:\Users\jenkins.minikube6\minikube-integration\.minikube\files for local assets ...
	I0127 12:38:30.669660    9948 filesync.go:149] local asset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> 59562.pem in /etc/ssl/certs
	I0127 12:38:30.669660    9948 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem -> /etc/ssl/certs/59562.pem
	I0127 12:38:30.680271    9948 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0127 12:38:30.702550    9948 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\ssl\certs\59562.pem --> /etc/ssl/certs/59562.pem (1708 bytes)
	I0127 12:38:30.754344    9948 start.go:296] duration metric: took 4.8892295s for postStartSetup
	I0127 12:38:30.754459    9948 fix.go:56] duration metric: took 1m29.402072s for fixHost
	I0127 12:38:30.754615    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:38:32.911209    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:38:32.911209    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:32.912200    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:38:35.470420    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:38:35.470420    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:35.475800    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:38:35.476512    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.205.217 22 <nil> <nil>}
	I0127 12:38:35.476512    9948 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0127 12:38:35.610220    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: 1737981515.621558141
	
	I0127 12:38:35.610355    9948 fix.go:216] guest clock: 1737981515.621558141
	I0127 12:38:35.610355    9948 fix.go:229] Guest: 2025-01-27 12:38:35.621558141 +0000 UTC Remote: 2025-01-27 12:38:30.7545355 +0000 UTC m=+294.660634101 (delta=4.867022641s)
	I0127 12:38:35.610473    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:38:37.767540    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:38:37.768644    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:37.768726    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:38:40.287970    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:38:40.288485    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:40.294123    9948 main.go:141] libmachine: Using SSH client type: native
	I0127 12:38:40.294123    9948 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.205.217 22 <nil> <nil>}
	I0127 12:38:40.294667    9948 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1737981515
	I0127 12:38:40.430345    9948 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Jan 27 12:38:35 UTC 2025
	
	I0127 12:38:40.430345    9948 fix.go:236] clock set: Mon Jan 27 12:38:35 UTC 2025
	 (err=<nil>)
	I0127 12:38:40.430345    9948 start.go:83] releasing machines lock for "multinode-659000-m02", held for 1m39.0780099s
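The clock fix above reads `date +%s.%N` inside the guest, compares it with the host-side timestamp, and resets the guest clock with `sudo date -s @<seconds>` when the drift (about 4.9s here) is considered too large. A small sketch of the delta computation, using the guest timestamp from the log as an example value and an assumed 2-second threshold:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Pretend this value came back from `date +%s.%N` in the guest, as in the log.
	guestUnix := 1737981515.621558141
	guest := time.Unix(0, int64(guestUnix*float64(time.Second)))
	host := time.Now()

	delta := guest.Sub(host)
	fmt.Printf("guest clock: %s, delta vs host: %s\n", guest.UTC(), delta)

	// Beyond a small threshold, the fix is to reset the guest clock over SSH:
	if delta > 2*time.Second || delta < -2*time.Second {
		fmt.Printf("would run in the guest: sudo date -s @%d\n", guest.Unix())
	}
}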
	I0127 12:38:40.430345    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:38:42.591115    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:38:42.591115    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:42.591115    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:38:45.140199    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:38:45.140878    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:45.146078    9948 out.go:177] * Found network options:
	I0127 12:38:45.149047    9948 out.go:177]   - NO_PROXY=172.29.198.106
	W0127 12:38:45.151690    9948 proxy.go:119] fail to check proxy env: Error ip not in block
	I0127 12:38:45.153974    9948 out.go:177]   - NO_PROXY=172.29.198.106
	W0127 12:38:45.156167    9948 proxy.go:119] fail to check proxy env: Error ip not in block
	W0127 12:38:45.157748    9948 proxy.go:119] fail to check proxy env: Error ip not in block
	I0127 12:38:45.159018    9948 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0127 12:38:45.160039    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:38:45.168117    9948 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0127 12:38:45.169133    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:38:47.364339    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:38:47.364443    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:47.364443    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:38:47.416800    9948 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:38:47.416887    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:47.416967    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:38:50.043339    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:38:50.043401    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:50.044030    9948 sshutil.go:53] new ssh client: &{IP:172.29.205.217 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000-m02\id_rsa Username:docker}
	I0127 12:38:50.101920    9948 main.go:141] libmachine: [stdout =====>] : 172.29.205.217
	
	I0127 12:38:50.101920    9948 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:38:50.103390    9948 sshutil.go:53] new ssh client: &{IP:172.29.205.217 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000-m02\id_rsa Username:docker}
	I0127 12:38:50.158952    9948 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0127 12:38:50.159009    9948 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.9908394s)
	W0127 12:38:50.159009    9948 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0127 12:38:50.170758    9948 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0127 12:38:50.175167    9948 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0127 12:38:50.175716    9948 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0166459s)
	W0127 12:38:50.175716    9948 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0127 12:38:50.206835    9948 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0127 12:38:50.206835    9948 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0127 12:38:50.206835    9948 start.go:495] detecting cgroup driver to use...
	I0127 12:38:50.206835    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:38:50.240717    9948 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0127 12:38:50.253417    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0127 12:38:50.284893    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	W0127 12:38:50.292860    9948 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0127 12:38:50.292860    9948 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0127 12:38:50.309268    9948 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0127 12:38:50.319809    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0127 12:38:50.355076    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:38:50.384436    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0127 12:38:50.415801    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0127 12:38:50.448665    9948 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0127 12:38:50.483387    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0127 12:38:50.514794    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0127 12:38:50.545169    9948 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0127 12:38:50.574956    9948 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0127 12:38:50.593431    9948 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:38:50.593955    9948 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0127 12:38:50.605551    9948 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0127 12:38:50.645521    9948 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0127 12:38:50.673465    9948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:38:50.887181    9948 ssh_runner.go:195] Run: sudo systemctl restart containerd
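The netfilter step above probes `net.bridge.bridge-nf-call-iptables`; because the sysctl is missing it loads br_netfilter and enables IPv4 forwarding before the containerd restart. A sketch of the same fallback logic run locally on the node (requires root, error handling kept minimal):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the bridge netfilter sysctl path is missing, the module is not loaded yet.
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(key); os.IsNotExist(err) {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter failed: %v: %s\n", err, out)
			return
		}
	}
	// Enable IPv4 forwarding, as the log does with `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Println("enabling ip_forward needs root:", err)
	}
}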
	I0127 12:38:50.919208    9948 start.go:495] detecting cgroup driver to use...
	I0127 12:38:50.932391    9948 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0127 12:38:50.956678    9948 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0127 12:38:50.956777    9948 command_runner.go:130] > [Unit]
	I0127 12:38:50.956777    9948 command_runner.go:130] > Description=Docker Application Container Engine
	I0127 12:38:50.956777    9948 command_runner.go:130] > Documentation=https://docs.docker.com
	I0127 12:38:50.956777    9948 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0127 12:38:50.956945    9948 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0127 12:38:50.957029    9948 command_runner.go:130] > StartLimitBurst=3
	I0127 12:38:50.957067    9948 command_runner.go:130] > StartLimitIntervalSec=60
	I0127 12:38:50.957067    9948 command_runner.go:130] > [Service]
	I0127 12:38:50.957067    9948 command_runner.go:130] > Type=notify
	I0127 12:38:50.957067    9948 command_runner.go:130] > Restart=on-failure
	I0127 12:38:50.957067    9948 command_runner.go:130] > Environment=NO_PROXY=172.29.198.106
	I0127 12:38:50.957067    9948 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0127 12:38:50.957067    9948 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0127 12:38:50.957067    9948 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0127 12:38:50.957067    9948 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0127 12:38:50.957067    9948 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0127 12:38:50.957067    9948 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0127 12:38:50.957067    9948 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0127 12:38:50.957067    9948 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0127 12:38:50.957067    9948 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0127 12:38:50.957067    9948 command_runner.go:130] > ExecStart=
	I0127 12:38:50.957067    9948 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0127 12:38:50.957067    9948 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0127 12:38:50.957067    9948 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0127 12:38:50.957067    9948 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0127 12:38:50.957067    9948 command_runner.go:130] > LimitNOFILE=infinity
	I0127 12:38:50.957067    9948 command_runner.go:130] > LimitNPROC=infinity
	I0127 12:38:50.957067    9948 command_runner.go:130] > LimitCORE=infinity
	I0127 12:38:50.957067    9948 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0127 12:38:50.957067    9948 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0127 12:38:50.957067    9948 command_runner.go:130] > TasksMax=infinity
	I0127 12:38:50.957067    9948 command_runner.go:130] > TimeoutStartSec=0
	I0127 12:38:50.957067    9948 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0127 12:38:50.957067    9948 command_runner.go:130] > Delegate=yes
	I0127 12:38:50.957067    9948 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0127 12:38:50.957067    9948 command_runner.go:130] > KillMode=process
	I0127 12:38:50.957633    9948 command_runner.go:130] > [Install]
	I0127 12:38:50.957633    9948 command_runner.go:130] > WantedBy=multi-user.target
	I0127 12:38:50.971827    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:38:51.002521    9948 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0127 12:38:51.038807    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0127 12:38:51.077125    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 12:38:51.114316    9948 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0127 12:38:51.182797    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0127 12:38:51.206990    9948 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0127 12:38:51.241224    9948 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0127 12:38:51.257584    9948 ssh_runner.go:195] Run: which cri-dockerd
	I0127 12:38:51.264066    9948 command_runner.go:130] > /usr/bin/cri-dockerd
	I0127 12:38:51.274320    9948 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0127 12:38:51.293277    9948 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0127 12:38:51.334990    9948 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0127 12:38:51.546606    9948 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0127 12:38:51.735800    9948 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0127 12:38:51.735800    9948 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0127 12:38:51.784327    9948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:38:51.995576    9948 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0127 12:38:54.710260    9948 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.7146552s)
	I0127 12:38:54.722678    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0127 12:38:54.759442    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 12:38:54.798264    9948 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0127 12:38:55.003157    9948 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0127 12:38:55.224965    9948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:38:55.426670    9948 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0127 12:38:55.467158    9948 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0127 12:38:55.502305    9948 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0127 12:38:55.692077    9948 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0127 12:38:55.806274    9948 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0127 12:38:55.819446    9948 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0127 12:38:55.829805    9948 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0127 12:38:55.830810    9948 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0127 12:38:55.830810    9948 command_runner.go:130] > Device: 0,22	Inode: 851         Links: 1
	I0127 12:38:55.830810    9948 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0127 12:38:55.830810    9948 command_runner.go:130] > Access: 2025-01-27 12:38:55.727897412 +0000
	I0127 12:38:55.830810    9948 command_runner.go:130] > Modify: 2025-01-27 12:38:55.727897412 +0000
	I0127 12:38:55.830810    9948 command_runner.go:130] > Change: 2025-01-27 12:38:55.731897417 +0000
	I0127 12:38:55.830810    9948 command_runner.go:130] >  Birth: -
	I0127 12:38:55.831138    9948 start.go:563] Will wait 60s for crictl version
	I0127 12:38:55.841369    9948 ssh_runner.go:195] Run: which crictl
	I0127 12:38:55.847897    9948 command_runner.go:130] > /usr/bin/crictl
	I0127 12:38:55.858852    9948 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0127 12:38:55.915128    9948 command_runner.go:130] > Version:  0.1.0
	I0127 12:38:55.915128    9948 command_runner.go:130] > RuntimeName:  docker
	I0127 12:38:55.915221    9948 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0127 12:38:55.915221    9948 command_runner.go:130] > RuntimeApiVersion:  v1
	I0127 12:38:55.915221    9948 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0127 12:38:55.924283    9948 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 12:38:55.966103    9948 command_runner.go:130] > 27.4.0
	I0127 12:38:55.976145    9948 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0127 12:38:56.015095    9948 command_runner.go:130] > 27.4.0
	I0127 12:38:56.021045    9948 out.go:235] * Preparing Kubernetes v1.32.1 on Docker 27.4.0 ...
	I0127 12:38:56.023552    9948 out.go:177]   - env NO_PROXY=172.29.198.106
	I0127 12:38:56.025630    9948 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0127 12:38:56.029967    9948 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0127 12:38:56.030978    9948 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0127 12:38:56.030978    9948 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0127 12:38:56.030978    9948 ip.go:211] Found interface: {Index:17 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:43:05:a6 Flags:up|broadcast|multicast|running}
	I0127 12:38:56.033049    9948 ip.go:214] interface addr: fe80::8ceb:a58b:811a:7c79/64
	I0127 12:38:56.033049    9948 ip.go:214] interface addr: 172.29.192.1/20
	I0127 12:38:56.050374    9948 ssh_runner.go:195] Run: grep 172.29.192.1	host.minikube.internal$ /etc/hosts
	I0127 12:38:56.057348    9948 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.29.192.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0127 12:38:56.081175    9948 mustload.go:65] Loading cluster: multinode-659000
	I0127 12:38:56.082007    9948 config.go:182] Loaded profile config "multinode-659000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:38:56.082324    9948 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	
	
	==> Docker <==
	Jan 27 12:36:14 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:14.113811702Z" level=warning msg="cleaning up after shim disconnected" id=9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f namespace=moby
	Jan 27 12:36:14 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:14.113825002Z" level=info msg="cleaning up dead shim" namespace=moby
	Jan 27 12:36:14 multinode-659000 dockerd[1103]: time="2025-01-27T12:36:14.115914814Z" level=info msg="ignoring event" container=9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.602318882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.604079090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.604098490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 12:36:27 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:27.604656892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.795612113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.795786714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.796654617Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.796995818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861006350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861082751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861094651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 12:36:46 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:46.861334452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 12:36:46 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:36:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6b22dbb5ef3e0d283203499fffad001c9c20c643564a55e5bfa5d6352f80e178/resolv.conf as [nameserver 172.29.192.1]"
	Jan 27 12:36:47 multinode-659000 cri-dockerd[1384]: time="2025-01-27T12:36:47Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ef504f99724cba01531b3894329439ae069a4ccac272e31bfac333cc24e62c53/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.321502068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.321825070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.321903471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.322491776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.384958874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.385201176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.385326577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jan 27 12:36:47 multinode-659000 dockerd[1109]: time="2025-01-27T12:36:47.385735080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	528243cca8bfb       8c811b4aec35f                                                                                         2 minutes ago       Running             busybox                   1                   ef504f99724cb       busybox-58667487b6-2jq9j
	b3a9ed6e130c0       c69fa2e9cbf5f                                                                                         2 minutes ago       Running             coredns                   1                   6b22dbb5ef3e0       coredns-668d6bf9bc-2qw6w
	389606c183b19       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       2                   b613e9a7a3565       storage-provisioner
	373bec67270fb       50415e5d05f05                                                                                         3 minutes ago       Running             kindnet-cni               1                   d43e4cc62e087       kindnet-z2hqq
	9b2db1d0cb61c       6e38f40d628db                                                                                         3 minutes ago       Exited              storage-provisioner       1                   b613e9a7a3565       storage-provisioner
	0283b35dee3cc       e29f9c7391fd9                                                                                         3 minutes ago       Running             kube-proxy                1                   34d579bb511fe       kube-proxy-s46mv
	ea993630a3109       95c0bda56fc4d                                                                                         3 minutes ago       Running             kube-apiserver            0                   5601285bb260a       kube-apiserver-multinode-659000
	0ef2a3b50bae8       a9e7e6b294baf                                                                                         3 minutes ago       Running             etcd                      0                   cdf534e99b2bb       etcd-multinode-659000
	ed51c7eaa9666       2b0d6572d062c                                                                                         3 minutes ago       Running             kube-scheduler            1                   910315897d842       kube-scheduler-multinode-659000
	8d4872cda28de       019ee182b58e2                                                                                         3 minutes ago       Running             kube-controller-manager   1                   b770a357d9830       kube-controller-manager-multinode-659000
	998a64b2baa2d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   4c82c0ec4aeaa       busybox-58667487b6-2jq9j
	f818dd15d8b02       c69fa2e9cbf5f                                                                                         27 minutes ago      Exited              coredns                   0                   4a53e133a1cd6       coredns-668d6bf9bc-2qw6w
	d758000dda95d       kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108              27 minutes ago      Exited              kindnet-cni               0                   f2d0bd65fe50d       kindnet-z2hqq
	bbec7ccef7da5       e29f9c7391fd9                                                                                         27 minutes ago      Exited              kube-proxy                0                   319cddeebceb6       kube-proxy-s46mv
	a16e06a038601       2b0d6572d062c                                                                                         27 minutes ago      Exited              kube-scheduler            0                   5423fc5113290       kube-scheduler-multinode-659000
	e07a66f8f6196       019ee182b58e2                                                                                         27 minutes ago      Exited              kube-controller-manager   0                   1bd5bf99bede3       kube-controller-manager-multinode-659000
	
	
	==> coredns [b3a9ed6e130c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 5e2e325279dfa828a8fd1b44d83ab4703abb0247d4beadde42157147650fe687c0862eaa4caa15a5d9139c48c9a9dd5ec3cd962ba60368e8ffb4d02ae4d29aeb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47464 - 34099 "HINFO IN 5313391549706874198.1206200090770907475. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.062040871s
	
	
	==> coredns [f818dd15d8b0] <==
	[INFO] 10.244.1.2:50877 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000188102s
	[INFO] 10.244.1.2:45384 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000183802s
	[INFO] 10.244.1.2:35073 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000227202s
	[INFO] 10.244.1.2:50517 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000061101s
	[INFO] 10.244.1.2:37353 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130501s
	[INFO] 10.244.1.2:42117 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114301s
	[INFO] 10.244.1.2:46171 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060401s
	[INFO] 10.244.0.3:55282 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117601s
	[INFO] 10.244.0.3:41761 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162301s
	[INFO] 10.244.0.3:35358 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000218902s
	[INFO] 10.244.0.3:50342 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000124402s
	[INFO] 10.244.1.2:38159 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159602s
	[INFO] 10.244.1.2:37043 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000171002s
	[INFO] 10.244.1.2:50762 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000168301s
	[INFO] 10.244.1.2:33014 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000603s
	[INFO] 10.244.0.3:34941 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134301s
	[INFO] 10.244.0.3:60117 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000393904s
	[INFO] 10.244.0.3:47506 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000214402s
	[INFO] 10.244.0.3:42968 - 5 "PTR IN 1.192.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000443604s
	[INFO] 10.244.1.2:52260 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193802s
	[INFO] 10.244.1.2:40492 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000310903s
	[INFO] 10.244.1.2:50341 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000074s
	[INFO] 10.244.1.2:41676 - 5 "PTR IN 1.192.29.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0000637s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-659000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-659000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	                    minikube.k8s.io/name=multinode-659000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_01_27T12_12_00_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 12:11:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-659000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 12:39:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:11:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:11:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:11:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Jan 2025 12:36:32 +0000   Mon, 27 Jan 2025 12:36:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.29.198.106
	  Hostname:    multinode-659000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 312902fc96b948148d51eecf097c4a9d
	  System UUID:                be6234aa-9e29-bb41-8165-59b265a4d7d0
	  Boot ID:                    058425a5-0652-4c5c-a517-2369b8cac13d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-2jq9j                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-668d6bf9bc-2qw6w                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-659000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m49s
	  kube-system                 kindnet-z2hqq                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-659000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-controller-manager-multinode-659000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-s46mv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-659000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 27m                    kube-proxy       
	  Normal   Starting                 3m46s                  kube-proxy       
	  Normal   Starting                 27m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  27m (x8 over 27m)      kubelet          Node multinode-659000 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    27m (x8 over 27m)      kubelet          Node multinode-659000 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     27m (x7 over 27m)      kubelet          Node multinode-659000 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    27m                    kubelet          Node multinode-659000 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  27m                    kubelet          Node multinode-659000 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     27m                    kubelet          Node multinode-659000 status is now: NodeHasSufficientPID
	  Normal   Starting                 27m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           27m                    node-controller  Node multinode-659000 event: Registered Node multinode-659000 in Controller
	  Normal   NodeReady                27m                    kubelet          Node multinode-659000 status is now: NodeReady
	  Normal   Starting                 3m55s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  3m55s (x8 over 3m55s)  kubelet          Node multinode-659000 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m55s (x8 over 3m55s)  kubelet          Node multinode-659000 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m55s (x7 over 3m55s)  kubelet          Node multinode-659000 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  3m55s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 3m50s                  kubelet          Node multinode-659000 has been rebooted, boot id: 058425a5-0652-4c5c-a517-2369b8cac13d
	  Normal   RegisteredNode           3m47s                  node-controller  Node multinode-659000 event: Registered Node multinode-659000 in Controller
	
	
	Name:               multinode-659000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-659000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	                    minikube.k8s.io/name=multinode-659000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_01_27T12_15_08_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 12:15:07 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-659000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 12:32:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 27 Jan 2025 12:28:32 +0000   Mon, 27 Jan 2025 12:36:34 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.29.199.129
	  Hostname:    multinode-659000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 30ce15ff72904b54b07c49f3e2f28802
	  System UUID:                b6923799-fa1e-b54c-9340-50dd6a2378f5
	  Boot ID:                    3308d183-ec79-4aeb-9d90-80d47cdbff63
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-ktfxc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kindnet-n7vjl               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-pjhc8            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24m                kube-proxy       
	  Normal  NodeHasSufficientMemory  24m (x2 over 24m)  kubelet          Node multinode-659000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x2 over 24m)  kubelet          Node multinode-659000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x2 over 24m)  kubelet          Node multinode-659000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           24m                node-controller  Node multinode-659000-m02 event: Registered Node multinode-659000-m02 in Controller
	  Normal  NodeReady                23m                kubelet          Node multinode-659000-m02 status is now: NodeReady
	  Normal  RegisteredNode           3m47s              node-controller  Node multinode-659000-m02 event: Registered Node multinode-659000-m02 in Controller
	  Normal  NodeNotReady             2m57s              node-controller  Node multinode-659000-m02 status is now: NodeNotReady
	
	
	Name:               multinode-659000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-659000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=21d19df81a8d69cdaec1a8f1932c09dc00369650
	                    minikube.k8s.io/name=multinode-659000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_01_27T12_31_04_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Jan 2025 12:31:04 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-659000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Jan 2025 12:32:15 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 27 Jan 2025 12:31:22 +0000   Mon, 27 Jan 2025 12:33:08 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.29.206.88
	  Hostname:    multinode-659000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 5cd7b7bdbad940e0831e949f70fdd5af
	  System UUID:                bab0a90b-9ed8-ba42-88b9-fc6568ad7a53
	  Boot ID:                    9d0d04c8-71ef-487a-a13c-e1de6463b3fe
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.1
	  Kube-Proxy Version:         v1.32.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-kpfjt       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-sk5js    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 19m                    kube-proxy       
	  Normal  Starting                 8m23s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  19m (x2 over 19m)      kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)      kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)      kubelet          Node multinode-659000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                19m                    kubelet          Node multinode-659000-m03 status is now: NodeReady
	  Normal  Starting                 8m28s                  kubelet          Starting kubelet.
	  Normal  CIDRAssignmentFailed     8m27s                  cidrAllocator    Node multinode-659000-m03 status is now: CIDRAssignmentFailed
	  Normal  NodeHasSufficientMemory  8m27s (x2 over 8m27s)  kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m27s (x2 over 8m27s)  kubelet          Node multinode-659000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m27s (x2 over 8m27s)  kubelet          Node multinode-659000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m23s                  node-controller  Node multinode-659000-m03 event: Registered Node multinode-659000-m03 in Controller
	  Normal  NodeReady                8m9s                   kubelet          Node multinode-659000-m03 status is now: NodeReady
	  Normal  NodeNotReady             6m23s                  node-controller  Node multinode-659000-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           3m47s                  node-controller  Node multinode-659000-m03 event: Registered Node multinode-659000-m03 in Controller
	
	
	==> dmesg <==
	[Jan27 12:34] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.706235] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.791193] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +6.780102] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan27 12:35] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.194598] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[ +25.881577] systemd-fstab-generator[1029]: Ignoring "noauto" option for root device
	[  +0.104839] kauditd_printk_skb: 75 callbacks suppressed
	[  +0.497850] systemd-fstab-generator[1069]: Ignoring "noauto" option for root device
	[  +0.189754] systemd-fstab-generator[1081]: Ignoring "noauto" option for root device
	[  +0.209865] systemd-fstab-generator[1095]: Ignoring "noauto" option for root device
	[  +2.995294] systemd-fstab-generator[1337]: Ignoring "noauto" option for root device
	[  +0.193187] systemd-fstab-generator[1349]: Ignoring "noauto" option for root device
	[  +0.167597] systemd-fstab-generator[1361]: Ignoring "noauto" option for root device
	[  +0.247752] systemd-fstab-generator[1376]: Ignoring "noauto" option for root device
	[  +0.858687] systemd-fstab-generator[1500]: Ignoring "noauto" option for root device
	[  +0.090112] kauditd_printk_skb: 206 callbacks suppressed
	[  +3.380441] systemd-fstab-generator[1641]: Ignoring "noauto" option for root device
	[  +1.786352] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.236723] kauditd_printk_skb: 10 callbacks suppressed
	[  +4.105586] systemd-fstab-generator[2522]: Ignoring "noauto" option for root device
	[Jan27 12:36] kauditd_printk_skb: 70 callbacks suppressed
	[ +43.939067] hrtimer: interrupt took 2738729 ns
	
	
	==> etcd [0ef2a3b50bae] <==
	{"level":"info","ts":"2025-01-27T12:35:38.404888Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-01-27T12:35:38.405566Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d020e240c474bd89","local-member-id":"925e6945be3a5b5b","added-peer-id":"925e6945be3a5b5b","added-peer-peer-urls":["https://172.29.204.17:2380"]}
	{"level":"info","ts":"2025-01-27T12:35:38.405716Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d020e240c474bd89","local-member-id":"925e6945be3a5b5b","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:35:38.405754Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-01-27T12:35:38.407643Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-01-27T12:35:38.408091Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"925e6945be3a5b5b","initial-advertise-peer-urls":["https://172.29.198.106:2380"],"listen-peer-urls":["https://172.29.198.106:2380"],"advertise-client-urls":["https://172.29.198.106:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.29.198.106:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-01-27T12:35:38.408386Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-01-27T12:35:38.408686Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.29.198.106:2380"}
	{"level":"info","ts":"2025-01-27T12:35:38.408809Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.29.198.106:2380"}
	{"level":"info","ts":"2025-01-27T12:35:39.355207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b is starting a new election at term 2"}
	{"level":"info","ts":"2025-01-27T12:35:39.355615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became pre-candidate at term 2"}
	{"level":"info","ts":"2025-01-27T12:35:39.355770Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b received MsgPreVoteResp from 925e6945be3a5b5b at term 2"}
	{"level":"info","ts":"2025-01-27T12:35:39.355926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became candidate at term 3"}
	{"level":"info","ts":"2025-01-27T12:35:39.356088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b received MsgVoteResp from 925e6945be3a5b5b at term 3"}
	{"level":"info","ts":"2025-01-27T12:35:39.356235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"925e6945be3a5b5b became leader at term 3"}
	{"level":"info","ts":"2025-01-27T12:35:39.356449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 925e6945be3a5b5b elected leader 925e6945be3a5b5b at term 3"}
	{"level":"info","ts":"2025-01-27T12:35:39.368540Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"925e6945be3a5b5b","local-member-attributes":"{Name:multinode-659000 ClientURLs:[https://172.29.198.106:2379]}","request-path":"/0/members/925e6945be3a5b5b/attributes","cluster-id":"d020e240c474bd89","publish-timeout":"7s"}
	{"level":"info","ts":"2025-01-27T12:35:39.369045Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T12:35:39.371833Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-01-27T12:35:39.372238Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-01-27T12:35:39.374158Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-01-27T12:35:39.383680Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T12:35:39.391404Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-01-27T12:35:39.392982Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.29.198.106:2379"}
	{"level":"info","ts":"2025-01-27T12:35:39.399505Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 12:39:31 up 5 min,  0 users,  load average: 0.39, 0.40, 0.19
	Linux multinode-659000 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [373bec67270f] <==
	I0127 12:38:45.401624       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:38:55.400693       1 main.go:297] Handling node with IPs: map[172.29.198.106:{}]
	I0127 12:38:55.400787       1 main.go:301] handling current node
	I0127 12:38:55.400806       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:38:55.401166       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:38:55.401354       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:38:55.401436       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:39:05.407543       1 main.go:297] Handling node with IPs: map[172.29.198.106:{}]
	I0127 12:39:05.407687       1 main.go:301] handling current node
	I0127 12:39:05.407708       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:39:05.407717       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:39:05.412357       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:39:05.412394       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:39:15.409543       1 main.go:297] Handling node with IPs: map[172.29.198.106:{}]
	I0127 12:39:15.409582       1 main.go:301] handling current node
	I0127 12:39:15.409602       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:39:15.409609       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:39:15.409996       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:39:15.410065       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:39:25.406002       1 main.go:297] Handling node with IPs: map[172.29.198.106:{}]
	I0127 12:39:25.406431       1 main.go:301] handling current node
	I0127 12:39:25.406559       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:39:25.406854       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:39:25.407797       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:39:25.407916       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [d758000dda95] <==
	I0127 12:32:34.854469       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:32:44.853378       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:32:44.853424       1 main.go:301] handling current node
	I0127 12:32:44.853441       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:32:44.853447       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:32:44.853735       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:32:44.853765       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:32:54.859317       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:32:54.859396       1 main.go:301] handling current node
	I0127 12:32:54.859415       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:32:54.859421       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:32:54.859756       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:32:54.859853       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:33:04.861975       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:33:04.862085       1 main.go:301] handling current node
	I0127 12:33:04.862106       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:33:04.862113       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:33:04.862780       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:33:04.862861       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	I0127 12:33:14.853823       1 main.go:297] Handling node with IPs: map[172.29.204.17:{}]
	I0127 12:33:14.853859       1 main.go:301] handling current node
	I0127 12:33:14.853877       1 main.go:297] Handling node with IPs: map[172.29.199.129:{}]
	I0127 12:33:14.853884       1 main.go:324] Node multinode-659000-m02 has CIDR [10.244.1.0/24] 
	I0127 12:33:14.854153       1 main.go:297] Handling node with IPs: map[172.29.206.88:{}]
	I0127 12:33:14.854165       1 main.go:324] Node multinode-659000-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [ea993630a310] <==
	I0127 12:35:41.488750       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0127 12:35:41.488990       1 aggregator.go:171] initial CRD sync complete...
	I0127 12:35:41.489245       1 autoregister_controller.go:144] Starting autoregister controller
	I0127 12:35:41.489480       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0127 12:35:41.489653       1 cache.go:39] Caches are synced for autoregister controller
	I0127 12:35:41.499151       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0127 12:35:41.527390       1 shared_informer.go:320] Caches are synced for configmaps
	I0127 12:35:41.528625       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0127 12:35:41.529892       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0127 12:35:41.530639       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0127 12:35:41.531604       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0127 12:35:41.531638       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0127 12:35:41.534721       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0127 12:35:41.540933       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0127 12:35:41.545944       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0127 12:35:42.357869       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0127 12:35:42.374307       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0127 12:35:43.074223       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.29.198.106]
	I0127 12:35:43.075938       1 controller.go:615] quota admission added evaluator for: endpoints
	I0127 12:35:43.085006       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0127 12:35:44.603084       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0127 12:35:44.989601       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0127 12:35:45.141450       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0127 12:35:45.327075       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0127 12:35:45.338333       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [8d4872cda28d] <==
	I0127 12:35:44.899476       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:35:44.900201       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000"
	I0127 12:35:44.900496       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m02"
	I0127 12:35:44.900687       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-659000-m03"
	I0127 12:35:44.901405       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0127 12:35:44.984858       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:35:45.000632       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="180.930208ms"
	I0127 12:35:45.003909       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="39.2µs"
	I0127 12:35:45.016382       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="195.414857ms"
	I0127 12:35:45.016698       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="108.2µs"
	I0127 12:35:54.975850       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:36:32.834093       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:36:32.834425       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:32.855708       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:34.928482       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:34.940809       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:36:34.955742       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:35.025877       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="15.32946ms"
	I0127 12:36:35.026020       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="30.3µs"
	I0127 12:36:40.041357       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m02"
	I0127 12:36:47.580904       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="50.8µs"
	I0127 12:36:48.616631       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="19.328909ms"
	I0127 12:36:48.617909       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="35.8µs"
	I0127 12:36:48.650691       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="23.414753ms"
	I0127 12:36:48.651163       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="28.701µs"
	
	
	==> kube-controller-manager [e07a66f8f619] <==
	I0127 12:29:38.000555       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000"
	I0127 12:30:52.866288       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:30:52.895359       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:30:58.140304       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:31:04.208510       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:31:04.209007       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-659000-m03\" does not exist"
	I0127 12:31:04.238560       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-659000-m03" podCIDRs=["10.244.3.0/24"]
	I0127 12:31:04.238634       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	E0127 12:31:04.255963       1 range_allocator.go:433] "Failed to update node PodCIDR after multiple attempts" err="failed to patch node CIDR: Node \"multinode-659000-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-659000-m03" podCIDRs=["10.244.4.0/24"]
	E0127 12:31:04.256068       1 range_allocator.go:439] "CIDR assignment for node failed. Releasing allocated CIDR" err="failed to patch node CIDR: Node \"multinode-659000-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid]" logger="node-ipam-controller" node="multinode-659000-m03"
	E0127 12:31:04.256109       1 range_allocator.go:252] "Unhandled Error" err="error syncing 'multinode-659000-m03': failed to patch node CIDR: Node \"multinode-659000-m03\" is invalid: [spec.podCIDRs: Invalid value: []string{\"10.244.4.0/24\", \"10.244.3.0/24\"}: may specify no more than one CIDR for each IP family, spec.podCIDRs: Forbidden: node updates may not change podCIDR except from \"\" to valid], requeuing" logger="UnhandledError"
	I0127 12:31:04.256134       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:31:04.261242       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:31:04.513319       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:31:05.081710       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:31:08.523576       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:31:14.394811       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:31:22.407069       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:31:22.407472       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:31:22.419743       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:31:23.498434       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:33:08.544063       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-659000-m02"
	I0127 12:33:08.544656       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:33:08.574301       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	I0127 12:33:13.661256       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-659000-m03"
	
	
	==> kube-proxy [0283b35dee3c] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 12:35:44.599245       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 12:35:44.767652       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.198.106"]
	E0127 12:35:44.770299       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 12:35:45.038438       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 12:35:45.038556       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 12:35:45.038587       1 server_linux.go:170] "Using iptables Proxier"
	I0127 12:35:45.043111       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 12:35:45.045042       1 server.go:497] "Version info" version="v1.32.1"
	I0127 12:35:45.045375       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:35:45.053262       1 config.go:199] "Starting service config controller"
	I0127 12:35:45.054808       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 12:35:45.054873       1 config.go:329] "Starting node config controller"
	I0127 12:35:45.054880       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 12:35:45.058308       1 config.go:105] "Starting endpoint slice config controller"
	I0127 12:35:45.058492       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 12:35:45.155116       1 shared_informer.go:320] Caches are synced for node config
	I0127 12:35:45.155116       1 shared_informer.go:320] Caches are synced for service config
	I0127 12:35:45.159566       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [bbec7ccef7da] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0127 12:12:05.352123       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0127 12:12:05.378799       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.29.204.17"]
	E0127 12:12:05.378872       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0127 12:12:05.470419       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0127 12:12:05.470552       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0127 12:12:05.470596       1 server_linux.go:170] "Using iptables Proxier"
	I0127 12:12:05.475557       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0127 12:12:05.476697       1 server.go:497] "Version info" version="v1.32.1"
	I0127 12:12:05.476717       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:12:05.478788       1 config.go:199] "Starting service config controller"
	I0127 12:12:05.478844       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0127 12:12:05.478916       1 config.go:105] "Starting endpoint slice config controller"
	I0127 12:12:05.479018       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0127 12:12:05.480053       1 config.go:329] "Starting node config controller"
	I0127 12:12:05.480113       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0127 12:12:05.579605       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0127 12:12:05.579669       1 shared_informer.go:320] Caches are synced for service config
	I0127 12:12:05.580463       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a16e06a03860] <==
	W0127 12:11:56.887386       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0127 12:11:56.887549       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:11:56.918642       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0127 12:11:56.919135       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:11:57.039216       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0127 12:11:57.039707       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:11:57.055169       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0127 12:11:57.055233       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0127 12:11:57.106656       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0127 12:11:57.106828       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:11:57.214186       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0127 12:11:57.214290       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:11:57.298150       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0127 12:11:57.298337       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0127 12:11:57.310098       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0127 12:11:57.310312       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0127 12:11:57.312117       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0127 12:11:57.312192       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0127 12:11:57.321525       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0127 12:11:57.321832       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0127 12:11:59.701790       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:33:15.443053       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0127 12:33:15.443143       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0127 12:33:15.452458       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0127 12:33:15.487412       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ed51c7eaa966] <==
	I0127 12:35:39.285954       1 serving.go:386] Generated self-signed cert in-memory
	W0127 12:35:41.361191       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0127 12:35:41.363231       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0127 12:35:41.363467       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0127 12:35:41.363598       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0127 12:35:41.458309       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.1"
	I0127 12:35:41.458594       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0127 12:35:41.465036       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0127 12:35:41.465587       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0127 12:35:41.466480       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0127 12:35:41.466554       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0127 12:35:41.567642       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 27 12:36:26 multinode-659000 kubelet[1648]: E0127 12:36:26.356493    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	Jan 27 12:36:26 multinode-659000 kubelet[1648]: E0127 12:36:26.402364    1648 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Jan 27 12:36:27 multinode-659000 kubelet[1648]: E0127 12:36:27.356407    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	Jan 27 12:36:27 multinode-659000 kubelet[1648]: I0127 12:36:27.357050    1648 scope.go:117] "RemoveContainer" containerID="9b2db1d0cb61cbdc97628de87433c96ccef2f405193b1a5fc67abd37e9d9851f"
	Jan 27 12:36:28 multinode-659000 kubelet[1648]: E0127 12:36:28.356371    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	Jan 27 12:36:29 multinode-659000 kubelet[1648]: E0127 12:36:29.355555    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	Jan 27 12:36:30 multinode-659000 kubelet[1648]: E0127 12:36:30.356227    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-2qw6w" podUID="8f0367fc-d842-4cc3-8e71-30869a548243"
	Jan 27 12:36:31 multinode-659000 kubelet[1648]: E0127 12:36:31.356043    1648 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-2jq9j" podUID="244fa7e9-f6c4-46a7-b61f-8717e13fd270"
	Jan 27 12:36:36 multinode-659000 kubelet[1648]: I0127 12:36:36.363314    1648 scope.go:117] "RemoveContainer" containerID="5f274e5a8851d2aeb5403952c3fba0274fe53614e2e0995d1046693d7e725d5d"
	Jan 27 12:36:36 multinode-659000 kubelet[1648]: E0127 12:36:36.393311    1648 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 12:36:36 multinode-659000 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 12:36:36 multinode-659000 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 12:36:36 multinode-659000 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 12:36:36 multinode-659000 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 12:36:36 multinode-659000 kubelet[1648]: I0127 12:36:36.409087    1648 scope.go:117] "RemoveContainer" containerID="f91e9c2d3ba64a6d34c9bab7c1953b46f4006e0bb493bd1ae993c489cd76e02c"
	Jan 27 12:37:36 multinode-659000 kubelet[1648]: E0127 12:37:36.391770    1648 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 12:37:36 multinode-659000 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 12:37:36 multinode-659000 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 12:37:36 multinode-659000 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 12:37:36 multinode-659000 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 27 12:38:36 multinode-659000 kubelet[1648]: E0127 12:38:36.391553    1648 iptables.go:577] "Could not set up iptables canary" err=<
	Jan 27 12:38:36 multinode-659000 kubelet[1648]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jan 27 12:38:36 multinode-659000 kubelet[1648]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 27 12:38:36 multinode-659000 kubelet[1648]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 27 12:38:36 multinode-659000 kubelet[1648]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-659000 -n multinode-659000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-659000 -n multinode-659000: (12.0173763s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-659000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (471.29s)
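
Note: the kube-controller-manager log quoted above shows the node-ipam-controller rejecting a second IPv4 PodCIDR for multinode-659000-m03 after the restart ("may specify no more than one CIDR for each IP family"). The following is a minimal, hypothetical Go sketch (not part of the test suite), assuming client-go and a reachable kubeconfig, that prints each node's spec.podCIDR/spec.podCIDRs so a duplicate assignment like this can be confirmed by hand.

// podcidrs.go: hypothetical diagnostic helper (not part of the minikube test suite).
// It lists every node's spec.podCIDRs, which is where the duplicate-CIDR conflict
// logged by the node-ipam-controller above would be visible.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load whatever kubeconfig KUBECONFIG (or the default path) points at.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, &clientcmd.ConfigOverrides{}).ClientConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// A node may carry at most one CIDR per IP family; two IPv4 entries here
		// corresponds to the "may specify no more than one CIDR" error above.
		fmt.Printf("%s podCIDR=%s podCIDRs=%v\n", n.Name, n.Spec.PodCIDR, n.Spec.PodCIDRs)
	}
}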

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (302.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-739200 --driver=hyperv
E0127 12:55:47.494870    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 12:57:04.054473    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-739200 --driver=hyperv: exit status 1 (4m59.5409905s)

                                                
                                                
-- stdout --
	* [NoKubernetes-739200] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-739200" primary control-plane node in "NoKubernetes-739200" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

                                                
                                                
-- /stdout --
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-739200 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-739200 -n NoKubernetes-739200
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-739200 -n NoKubernetes-739200: exit status 7 (2.817448s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0127 13:00:36.673899   12872 main.go:137] libmachine: [stderr =====>] : Hyper-V\Get-VM : Hyper-V was unable to find a virtual machine with name "NoKubernetes-739200".
	At line:1 char:3
	+ ( Hyper-V\Get-VM NoKubernetes-739200 ).state
	+   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
	    + CategoryInfo          : InvalidArgument: (NoKubernetes-739200:String) [Get-VM], VirtualizationException
	    + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVM
	 
	

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-739200" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (302.36s)
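
Note: the stderr above is the libmachine Hyper-V driver probing the VM state through PowerShell; Get-VM fails because the VM was never created within the five-minute start window. A minimal Go sketch of that same probe (hypothetical, but mirroring the command shown in the log) looks like this; when the VM does not exist, Get-VM writes the "unable to find a virtual machine" error and powershell.exe exits non-zero.

// vmstate.go: hypothetical sketch of the state probe seen in the stderr above.
package main

import (
	"fmt"
	"os/exec"
)

func vmState(name string) (string, error) {
	cmd := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive",
		fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name))
	out, err := cmd.CombinedOutput() // stdout and stderr together
	return string(out), err
}

func main() {
	state, err := vmState("NoKubernetes-739200")
	fmt.Printf("state=%q err=%v\n", state, err)
}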

                                                
                                    
x
+
TestPause/serial/DeletePaused (38.76s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-918600 --alsologtostderr -v=5
pause_test.go:132: (dbg) Non-zero exit: out/minikube-windows-amd64.exe delete -p pause-918600 --alsologtostderr -v=5: exit status 1 (32.6627817s)

                                                
                                                
-- stdout --
	* Stopping node "pause-918600"  ...
	* Powering off "pause-918600" via SSH ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 13:27:07.430107    3800 out.go:345] Setting OutFile to fd 1492 ...
	I0127 13:27:07.512800    3800 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:27:07.512800    3800 out.go:358] Setting ErrFile to fd 1928...
	I0127 13:27:07.512800    3800 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 13:27:07.533219    3800 out.go:352] Setting JSON to false
	I0127 13:27:07.542309    3800 cli_runner.go:164] Run: docker ps -a --filter label=name.minikube.sigs.k8s.io --format {{.Names}}
	I0127 13:27:07.632959    3800 config.go:182] Loaded profile config "auto-698800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 13:27:07.633454    3800 config.go:182] Loaded profile config "cert-expiration-934800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 13:27:07.633957    3800 config.go:182] Loaded profile config "ha-011400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 13:27:07.634211    3800 config.go:182] Loaded profile config "kindnet-698800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 13:27:07.634842    3800 config.go:182] Loaded profile config "pause-918600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 13:27:07.635469    3800 config.go:182] Loaded profile config "pause-918600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 13:27:07.635469    3800 delete.go:301] DeleteProfiles
	I0127 13:27:07.635537    3800 delete.go:329] Deleting pause-918600
	I0127 13:27:07.635537    3800 delete.go:334] pause-918600 configuration: &{Name:pause-918600 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.1 ClusterName:pause-918600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.29.207.180 Port:8443 KubernetesVersion:v1.32.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 13:27:07.636404    3800 config.go:182] Loaded profile config "pause-918600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 13:27:07.637081    3800 config.go:182] Loaded profile config "pause-918600": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 13:27:07.639325    3800 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-918600 ).state
	I0127 13:27:10.058473    3800 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 13:27:10.058473    3800 main.go:141] libmachine: [stderr =====>] : 
	I0127 13:27:10.058473    3800 stop.go:39] StopHost: pause-918600
	I0127 13:27:10.063586    3800 out.go:177] * Stopping node "pause-918600"  ...
	I0127 13:27:10.066054    3800 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0127 13:27:10.077296    3800 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0127 13:27:10.077296    3800 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-918600 ).state
	I0127 13:27:12.355363    3800 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 13:27:12.355363    3800 main.go:141] libmachine: [stderr =====>] : 
	I0127 13:27:12.355753    3800 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-918600 ).networkadapters[0]).ipaddresses[0]
	I0127 13:27:15.009453    3800 main.go:141] libmachine: [stdout =====>] : 172.29.207.180
	
	I0127 13:27:15.009543    3800 main.go:141] libmachine: [stderr =====>] : 
	I0127 13:27:15.009967    3800 sshutil.go:53] new ssh client: &{IP:172.29.207.180 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\pause-918600\id_rsa Username:docker}
	I0127 13:27:15.120619    3800 ssh_runner.go:235] Completed: sudo mkdir -p /var/lib/minikube/backup: (5.0432718s)
	I0127 13:27:15.132861    3800 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0127 13:27:15.209683    3800 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0127 13:27:15.275595    3800 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-918600 ).state
	I0127 13:27:17.587464    3800 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 13:27:17.588037    3800 main.go:141] libmachine: [stderr =====>] : 
	W0127 13:27:17.588323    3800 register.go:133] "PowerOff" was not found within the registered steps for "Deleting": [Deleting Stopping Done Puring home dir]
	I0127 13:27:17.682448    3800 out.go:177] * Powering off "pause-918600" via SSH ...
	I0127 13:27:17.686982    3800 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-918600 ).state
	I0127 13:27:19.985376    3800 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 13:27:19.985462    3800 main.go:141] libmachine: [stderr =====>] : 
	I0127 13:27:19.985566    3800 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-918600 ).networkadapters[0]).ipaddresses[0]
	I0127 13:27:22.647881    3800 main.go:141] libmachine: [stdout =====>] : 172.29.207.180
	
	I0127 13:27:22.647881    3800 main.go:141] libmachine: [stderr =====>] : 
	I0127 13:27:22.657396    3800 main.go:141] libmachine: Using SSH client type: native
	I0127 13:27:22.657396    3800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1475360] 0x1477ea0 <nil>  [] 0s} 172.29.207.180 22 <nil> <nil>}
	I0127 13:27:22.658354    3800 main.go:141] libmachine: About to run SSH command:
	sudo poweroff
	I0127 13:27:22.819794    3800 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0127 13:27:22.819929    3800 stop.go:100] poweroff result: out=, err=<nil>
	I0127 13:27:22.819929    3800 main.go:141] libmachine: Stopping "pause-918600"...
	I0127 13:27:22.819929    3800 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-918600 ).state
	I0127 13:27:25.881387    3800 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 13:27:25.881387    3800 main.go:141] libmachine: [stderr =====>] : 
	I0127 13:27:25.881387    3800 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Stop-VM pause-918600

                                                
                                                
** /stderr **
pause_test.go:134: failed to delete minikube with args: "out/minikube-windows-amd64.exe delete -p pause-918600 --alsologtostderr -v=5" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-918600 -n pause-918600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-918600 -n pause-918600: exit status 7 (3.0869631s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-918600" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-918600 -n pause-918600
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-918600 -n pause-918600: exit status 7 (3.0113509s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-918600" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/DeletePaused (38.76s)
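
Note: the delete backed up /etc/cni and /etc/kubernetes, ran "sudo poweroff" over SSH, and then appeared to hang waiting on Hyper-V\Stop-VM, leaving the profile in the "Stopped" state reported above. A hedged Go sketch of finishing the teardown by hand follows (assumptions: the VM is still registered with Hyper-V and the standard cmdlets are available); it only unregisters the VM, so re-running "out/minikube-windows-amd64.exe delete -p pause-918600" would still be needed to remove the profile files under the .minikube directory.

// cleanup.go: hypothetical sketch of hard-stopping and unregistering the VM with
// the Hyper-V cmdlets after a delete times out as above.
package main

import (
	"fmt"
	"os/exec"
)

// ps runs a single PowerShell command line and returns its combined output.
func ps(command string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command).CombinedOutput()
	return string(out), err
}

func main() {
	name := "pause-918600"
	// -TurnOff powers the VM off immediately instead of waiting for the guest,
	// which is the step the delete above appeared to be stuck on.
	if out, err := ps(fmt.Sprintf("Hyper-V\\Stop-VM %s -TurnOff", name)); err != nil {
		fmt.Println(out, err)
	}
	// Remove-VM unregisters the VM configuration; the VHD files stay on disk.
	if out, err := ps(fmt.Sprintf("Hyper-V\\Remove-VM %s -Force", name)); err != nil {
		fmt.Println(out, err)
	}
}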

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10800.507s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-698800 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-5d86dc444-6cthf" [a585879c-80c4-42e2-b0cd-43cdf59faa63] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
panic: test timed out after 3h0m0s
	running tests:
		TestNetworkPlugins (23m51s)
		TestNetworkPlugins/group/auto (8m58s)
		TestNetworkPlugins/group/calico (4m55s)
		TestNetworkPlugins/group/calico/Start (4m55s)
		TestNetworkPlugins/group/custom-flannel (4m20s)
		TestNetworkPlugins/group/custom-flannel/Start (4m20s)
		TestNetworkPlugins/group/kindnet (6m30s)
		TestNetworkPlugins/group/kindnet/NetCatPod (3s)
		TestStartStop (20m20s)

                                                
                                                
goroutine 2447 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2373 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d
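
Note: goroutine 2447 above is the alarm that the Go test runner arms from the -timeout flag (here 3h); when the deadline passes it panics with "test timed out after ..." and dumps every goroutine, which is what the remainder of this trace is. A tiny, hypothetical stand-alone reproduction:

// timeout_example_test.go: hypothetical test file showing the same failure mode.
// Run with "go test -timeout 2s"; the runner's alarm goroutine
// (testing.(*M).startAlarm, as in the trace above) panics with
// "panic: test timed out after 2s" and prints all goroutine stacks.
package example

import (
	"testing"
	"time"
)

func TestSleepsPastDeadline(t *testing.T) {
	time.Sleep(10 * time.Second) // never finishes before the 2s deadline
}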

                                                
                                                
goroutine 1 [chan receive, 4 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x489
testing.tRunner(0xc0005b41a0, 0xc000b6fbc8)
	/usr/local/go/src/testing/testing.go:1696 +0x104
testing.runTests(0xc000134090, {0x507d320, 0x2a, 0x2a}, {0xffffffffffffffff?, 0x2cd9f9?, 0x50a42c0?})
	/usr/local/go/src/testing/testing.go:2166 +0x43d
testing.(*M).Run(0xc00052c000)
	/usr/local/go/src/testing/testing.go:2034 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc00052c000)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:129 +0xa8

                                                
                                                
goroutine 2308 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc000624780)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc001417860)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc001417860)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001417860)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc001417860, 0xc00171e200)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2306
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2193 [syscall]:
syscall.Syscall(0x10?, 0xc001479170?, 0x1000000215ac5?, 0x1e?, 0x3?)
	/usr/local/go/src/runtime/syscall_windows.go:457 +0x29
syscall.WaitForSingleObject(0x488, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1140 +0x5d
os.(*Process).wait(0xc00095e300?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc00095e300)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc00095e300)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
os/exec.(*Cmd).CombinedOutput(0xc00095e300)
	/usr/local/go/src/os/exec/exec.go:1021 +0x85
k8s.io/minikube/test/integration.debugLogs(0xc000161ba0, {0xc001492170, 0xb})
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:554 +0x8a05
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000161ba0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:211 +0xbac
testing.tRunner(0xc000161ba0, 0xc00053e000)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2192
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2493 [syscall]:
syscall.Syscall6(0x2709dd?, 0x2388f830eb8?, 0x59?, 0xc00140bba8?, 0x88416a?, 0x17?, 0x100002082c6?, 0x238d4f39f38?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x52c, {0xc0015d5200?, 0x200, 0x0?}, 0xc000009278?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc001604d88?, {0xc0015d5200?, 0x200?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc001604d88, {0xc0015d5200, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0005402e0, {0xc0015d5200?, 0xc000680330?, 0x0?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc0007ff380, {0x36f6de0, 0xc000c06228})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36f6f60, 0xc0007ff380}, {0x36f6de0, 0xc000c06228}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x36f6f60, 0xc0007ff380})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x20ff36?, {0x36f6f60?, 0xc0007ff380?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x36f6f60, 0xc0007ff380}, {0x36f6ec0, 0xc0005402e0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc00140bfa8?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2193
	/usr/local/go/src/os/exec/exec.go:732 +0xa25

                                                
                                                
goroutine 112 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 111
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2217 [chan receive, 6 minutes]:
testing.(*T).Run(0xc0014171e0, {0x2a16b2c?, 0x36eecb8?}, 0xc001934870)
	/usr/local/go/src/testing/testing.go:1751 +0x392
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0014171e0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:111 +0x5be
testing.tRunner(0xc0014171e0, 0xc00053e400)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2192
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 123 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3727440)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 122
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 124 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0000d72c0, 0xc000078380)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 122
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2515 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x3730e38, 0xc0004e7e30}, {0x37248f0, 0xc00169e340}, 0x1, 0x0, 0xc001671be0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/loop.go:66 +0x1d0
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x3730e38?, 0xc0005de770?}, 0x3b9aca00, 0xc0018cfdd8?, 0x1, 0xc0018cfbe0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x3730e38, 0xc0005de770}, 0xc0005b56c0, {0xc001492570, 0xe}, {0x2a1a647, 0x7}, {0x2a20f43, 0xa}, 0xd18c2e2800)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.4(0xc0005b56c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:163 +0x3c5
testing.tRunner(0xc0005b56c0, 0xc00155de90)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2214
	/usr/local/go/src/testing/testing.go:1743 +0x377
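
Note: goroutine 2515 is the NetCatPod waiter itself: PodWait (helpers_test.go:371) polls via wait.PollUntilContextTimeout roughly once a second, for up to the 15m0s announced at net_test.go:163, until pods matching "app=netcat" are Running. A minimal sketch of that polling shape, assuming only k8s.io/apimachinery and with a placeholder condition standing in for the real pod check:

// podwait_sketch.go: hypothetical sketch of the wait.PollUntilContextTimeout
// pattern visible in goroutine 2515 (1s interval, immediate=true, long timeout).
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	start := time.Now()
	err := wait.PollUntilContextTimeout(context.Background(), 1*time.Second, 15*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			// Placeholder: the real PodWait lists pods for the label selector
			// and returns true once they have all reached the Running phase.
			return time.Since(start) > 3*time.Second, nil
		})
	fmt.Println("waited", time.Since(start).Round(time.Second), "err:", err)
}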

                                                
                                                
goroutine 110 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0000d71d0, 0x3c)
	/usr/local/go/src/runtime/sema.go:587 +0x15d
sync.(*Cond).Wait(0xc001567d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x374d080)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0000d72c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000208560, {0x36f8800, 0xc0013d6090}, 0x1, 0xc000078380)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000208560, 0x3b9aca00, 0x0, 0x1, 0xc000078380)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 124
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 111 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3731150, 0xc000078380}, 0xc000a21f50, 0xc000a21f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3731150, 0xc000078380}, 0x90?, 0xc000a21f50, 0xc000a21f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3731150?, 0xc000078380?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000a21fd0?, 0x39f844?, 0xc000b20700?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 124
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2499 [chan receive]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008a4f80, 0xc000078380)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2465
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2498 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3727440)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2465
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2310 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc000624780)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc001417ba0)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc001417ba0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001417ba0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc001417ba0, 0xc00171e280)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2306
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2504 [IO wait]:
internal/poll.runtime_pollWait(0x238d514e160, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0x550?, 0x2b08b2e8c397b3bf?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc001458f20, 0x33bed98)
	/usr/local/go/src/internal/poll/fd_windows.go:177 +0x105
internal/poll.(*FD).Read(0xc001458f08, {0xc00177c000, 0x2000, 0x2000})
	/usr/local/go/src/internal/poll/fd_windows.go:438 +0x2a7
net.(*netFD).Read(0xc001458f08, {0xc00177c000?, 0x10?, 0xc00048b8a0?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc000c06118, {0xc00177c000?, 0xc00177c005?, 0x1a?})
	/usr/local/go/src/net/net.go:189 +0x45
crypto/tls.(*atLeastReader).Read(0xc0016355a8, {0xc00177c000?, 0x0?, 0xc0016355a8?})
	/usr/local/go/src/crypto/tls/conn.go:809 +0x3b
bytes.(*Buffer).ReadFrom(0xc000201b38, {0x36f8de0, 0xc0016355a8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc000201888, {0x238d51c4be8, 0xc0016349f0}, 0xc00048ba10?)
	/usr/local/go/src/crypto/tls/conn.go:831 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc000201888, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:629 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:591
crypto/tls.(*Conn).Read(0xc000201888, {0xc0018ea000, 0x1000, 0xc0014c28c0?})
	/usr/local/go/src/crypto/tls/conn.go:1385 +0x150
bufio.(*Reader).Read(0xc000b0b3e0, {0xc0008710e0, 0x9, 0x5022960?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x36f7000, 0xc000b0b3e0}, {0xc0008710e0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0008710e0, 0x9, 0x27a745?}, {0x36f7000?, 0xc000b0b3e0?})
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.34.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0008710a0)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.34.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00048bfa8)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.34.0/http2/transport.go:2505 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc000b6d500)
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.34.0/http2/transport.go:2381 +0x7c
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 2503
	/home/jenkins/go/pkg/mod/golang.org/x/net@v0.34.0/http2/transport.go:912 +0xdfb

                                                
                                                
goroutine 2368 [syscall, 4 minutes]:
syscall.Syscall6(0x2709dd?, 0x2388f830a28?, 0xc0015cf341?, 0xc00066d1a8?, 0xa0?, 0x10?, 0x100002082c6?, 0x238d4f33a20?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x47c, {0xc0009fea04?, 0x5fc, 0x0?}, 0x2?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc001604248?, {0xc0009fea04?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc001604248, {0xc0009fea04, 0x5fc, 0x5fc})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000127020, {0xc0009fea04?, 0xc0000db330?, 0x204?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc0019342a0, {0x36f6de0, 0xc0000c8008})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36f6f60, 0xc0019342a0}, {0x36f6de0, 0xc0000c8008}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x36f6f60, 0xc0019342a0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x20ff36?, {0x36f6f60?, 0xc0019342a0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x36f6f60, 0xc0019342a0}, {0x36f6ec0, 0xc000127020}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc000078cb0?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2367
	/usr/local/go/src/os/exec/exec.go:732 +0xa25

                                                
                                                
goroutine 870 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3727440)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 796
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2213 [chan receive, 24 minutes]:
testing.(*testContext).waitParallel(0xc000624780)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc001416b60)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc001416b60)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001416b60)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc001416b60, 0xc00053e200)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2192
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2043 [chan receive, 24 minutes]:
testing.(*T).Run(0xc0005b4d00, {0x2a16b27?, 0xc000a31f60?}, 0xc000010000)
	/usr/local/go/src/testing/testing.go:1751 +0x392
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc0005b4d00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xd3
testing.tRunner(0xc0005b4d00, 0x33be188)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2306 [chan receive, 22 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x489
testing.tRunner(0xc001417380, 0x33be3c0)
	/usr/local/go/src/testing/testing.go:1696 +0x104
created by testing.(*T).Run in goroutine 2115
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 723 [IO wait, 161 minutes]:
internal/poll.runtime_pollWait(0x238d514e390, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0x2c8635?, 0x2709dd?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc00142c020, 0xc0019e3b88)
	/usr/local/go/src/internal/poll/fd_windows.go:177 +0x105
internal/poll.(*FD).acceptOne(0xc00142c008, 0x578, {0xc0013741e0?, 0xc0019e3be8?, 0x2d3045?}, 0xc0019e3c1c?)
	/usr/local/go/src/internal/poll/fd_windows.go:946 +0x65
internal/poll.(*FD).Accept(0xc00142c008, 0xc0019e3d68)
	/usr/local/go/src/internal/poll/fd_windows.go:980 +0x1b6
net.(*netFD).accept(0xc00142c008)
	/usr/local/go/src/net/fd_windows.go:182 +0x4b
net.(*TCPListener).accept(0xc0008a48c0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0008a48c0)
	/usr/local/go/src/net/tcpsock.go:372 +0x30
net/http.(*Server).Serve(0xc0006905a0, {0x37242c0, 0xc0008a48c0})
	/usr/local/go/src/net/http/server.go:3330 +0x30c
net/http.(*Server).ListenAndServe(0xc0006905a0)
	/usr/local/go/src/net/http/server.go:3259 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0005b44e0)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2213 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 656
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2212 +0x129

                                                
                                                
goroutine 883 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 882
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 882 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3731150, 0xc000078380}, 0xc001529f50, 0xc001529f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3731150, 0xc000078380}, 0x90?, 0xc001529f50, 0xc001529f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3731150?, 0xc000078380?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001529fd0?, 0x39f844?, 0xc001639a40?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 871
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2216 [chan receive, 4 minutes]:
testing.(*T).Run(0xc001417040, {0x2a16b2c?, 0x36eecb8?}, 0xc001934000)
	/usr/local/go/src/testing/testing.go:1751 +0x392
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001417040)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:111 +0x5be
testing.tRunner(0xc001417040, 0xc00053e380)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2192
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2214 [chan receive]:
testing.(*T).Run(0xc001416d00, {0x2a1f0ca?, 0x36eecb8?}, 0xc00155de90)
	/usr/local/go/src/testing/testing.go:1751 +0x392
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001416d00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:148 +0x86b
testing.tRunner(0xc001416d00, 0xc00053e280)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2192
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2391 [select, 6 minutes]:
os/exec.(*Cmd).watchCtx(0xc0014d8480, 0xc000079960)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2388
	/usr/local/go/src/os/exec/exec.go:759 +0x9e9

                                                
                                                
goroutine 2390 [syscall]:
syscall.Syscall6(0x270c45?, 0x0?, 0x0?, 0xc000000000?, 0x10?, 0x10?, 0x10100a49bc8?, 0x238d5405280?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x668, {0xc0018172a4?, 0xd5c, 0x2c9bdf?}, 0x21131e?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc000a76488?, {0xc0018172a4?, 0x8000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc000a76488, {0xc0018172a4, 0xd5c, 0xd5c})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000127530, {0xc0018172a4?, 0x4a2?, 0x3e3c?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc001934960, {0x36f6de0, 0xc0000c8550})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36f6f60, 0xc001934960}, {0x36f6de0, 0xc0000c8550}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000a49e78?, {0x36f6f60, 0xc001934960})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000a49f38?, {0x36f6f60?, 0xc001934960?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x36f6f60, 0xc001934960}, {0x36f6ec0, 0xc000127530}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc001970a10?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2388
	/usr/local/go/src/os/exec/exec.go:732 +0xa25

                                                
                                                
goroutine 2192 [chan receive, 24 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1651 +0x489
testing.tRunner(0xc000160000, 0xc000010000)
	/usr/local/go/src/testing/testing.go:1696 +0x104
created by testing.(*T).Run in goroutine 2043
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 948 [chan send, 151 minutes]:
os/exec.(*Cmd).watchCtx(0xc0014d8f00, 0xc0014e5030)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 947
	/usr/local/go/src/os/exec/exec.go:759 +0x9e9

                                                
                                                
goroutine 817 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00171ec50, 0x36)
	/usr/local/go/src/runtime/sema.go:587 +0x15d
sync.(*Cond).Wait(0xc001db7d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x374d080)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00171ec80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0014964a0, {0x36f8800, 0xc00170ad20}, 0x1, 0xc000078380)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0014964a0, 0x3b9aca00, 0x0, 0x1, 0xc000078380)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 871
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 2212 [chan receive, 24 minutes]:
testing.(*testContext).waitParallel(0xc000624780)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0014169c0)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0014169c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0014169c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc0014169c0, 0xc00053e180)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2192
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 871 [chan receive, 151 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00171ec80, 0xc000078380)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 796
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2215 [chan receive, 24 minutes]:
testing.(*testContext).waitParallel(0xc000624780)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc001416ea0)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc001416ea0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001416ea0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc001416ea0, 0xc00053e300)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2192
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 1023 [chan send, 144 minutes]:
os/exec.(*Cmd).watchCtx(0xc001899e00, 0xc0017879d0)
	/usr/local/go/src/os/exec/exec.go:798 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 843
	/usr/local/go/src/os/exec/exec.go:759 +0x9e9

                                                
                                                
goroutine 2307 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc000624780)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0014176c0)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0014176c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0014176c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc0014176c0, 0xc00171e1c0)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2306
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2115 [chan receive, 22 minutes]:
testing.(*T).Run(0xc0005b5520, {0x2a16b27?, 0x35f6d3?}, 0x33be3c0)
	/usr/local/go/src/testing/testing.go:1751 +0x392
k8s.io/minikube/test/integration.TestStartStop(0xc0005b5520)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0005b5520, 0x33be1d0)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2418 [select, 4 minutes]:
os/exec.(*Cmd).watchCtx(0xc000215680, 0xc000b203f0)
	/usr/local/go/src/os/exec/exec.go:773 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2367
	/usr/local/go/src/os/exec/exec.go:759 +0x9e9

                                                
                                                
goroutine 2312 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc000624780)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc0004b76c0)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc0004b76c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0004b76c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc0004b76c0, 0xc00171e340)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2306
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2388 [syscall, 6 minutes]:
syscall.Syscall(0x10?, 0xc000a65ca8?, 0x1000000215ac5?, 0x1e?, 0x3?)
	/usr/local/go/src/runtime/syscall_windows.go:457 +0x29
syscall.WaitForSingleObject(0x7f8, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1140 +0x5d
os.(*Process).wait(0xc0014d8480?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc0014d8480)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc0014d8480)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc00175ed00, 0xc0014d8480)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc00175ed00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc00175ed00, 0xc001934870)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2217
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2367 [syscall, 4 minutes]:
syscall.Syscall(0x10?, 0xc000a47ca8?, 0x1000000215ac5?, 0x1e?, 0x3?)
	/usr/local/go/src/runtime/syscall_windows.go:457 +0x29
syscall.WaitForSingleObject(0x248, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1140 +0x5d
os.(*Process).wait(0xc000215680?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc000215680)
	/usr/local/go/src/os/exec/exec.go:906 +0x45
os/exec.(*Cmd).Run(0xc000215680)
	/usr/local/go/src/os/exec/exec.go:610 +0x2d
k8s.io/minikube/test/integration.Run(0xc0005b4b60, 0xc000215680)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc0005b4b60)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc0005b4b60, 0xc001934000)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2216
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2211 [chan receive, 24 minutes]:
testing.(*testContext).waitParallel(0xc000624780)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc001416820)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc001416820)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001416820)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc001416820, 0xc00053e100)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2192
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2210 [chan receive, 24 minutes]:
testing.(*testContext).waitParallel(0xc000624780)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc000161d40)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc000161d40)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000161d40)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc000161d40, 0xc00053e080)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2192
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2309 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc000624780)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc001417a00)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc001417a00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001417a00)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc001417a00, 0xc00171e240)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2306
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2311 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc000624780)
	/usr/local/go/src/testing/testing.go:1818 +0xac
testing.(*T).Parallel(0xc001417d40)
	/usr/local/go/src/testing/testing.go:1485 +0x219
k8s.io/minikube/test/integration.MaybeParallel(0xc001417d40)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001417d40)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc001417d40, 0xc00171e2c0)
	/usr/local/go/src/testing/testing.go:1690 +0xcb
created by testing.(*T).Run in goroutine 2306
	/usr/local/go/src/testing/testing.go:1743 +0x377

                                                
                                                
goroutine 2369 [syscall, 4 minutes]:
syscall.Syscall6(0x270c45?, 0x2388f830598?, 0x67?, 0xc000000000?, 0x5bc1da?, 0x2?, 0x101002082c6?, 0x238d4f3a938?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x688, {0xc0015ddbc3?, 0x43d, 0x2c9bdf?}, 0x4?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc001604908?, {0xc0015ddbc3?, 0x2000?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc001604908, {0xc0015ddbc3, 0x43d, 0x43d})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000127270, {0xc0015ddbc3?, 0xc000218b30?, 0x1000?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc001934300, {0x36f6de0, 0xc000c06018})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36f6f60, 0xc001934300}, {0x36f6de0, 0xc000c06018}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x36f6f60, 0xc001934300})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x20ff36?, {0x36f6f60?, 0xc001934300?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x36f6f60, 0xc001934300}, {0x36f6ec0, 0xc000127270}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc0014d8300?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2367
	/usr/local/go/src/os/exec/exec.go:732 +0xa25

                                                
                                                
goroutine 2389 [syscall, 2 minutes]:
syscall.Syscall6(0x270c45?, 0x2388f830eb8?, 0x24a54d?, 0x0?, 0x10?, 0x27ca57?, 0x101002082c6?, 0x238d51e4618?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x758, {0xc0016d022b?, 0x5d5, 0x2c9bdf?}, 0xc001685340?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1019 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:443
syscall.Read(0xc000a76008?, {0xc0016d022b?, 0x800?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:422 +0x2d
internal/poll.(*FD).Read(0xc000a76008, {0xc0016d022b, 0x5d5, 0x5d5})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b9
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000127508, {0xc0016d022b?, 0xc00180eb30?, 0x22a?})
	/usr/local/go/src/os/file.go:124 +0x52
bytes.(*Buffer).ReadFrom(0xc001934930, {0x36f6de0, 0xc000c060c8})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x36f6f60, 0xc001934930}, {0x36f6de0, 0xc000c060c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x36f6f60, 0xc001934930})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0x20ff36?, {0x36f6f60?, 0xc001934930?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x36f6f60, 0xc001934930}, {0x36f6ec0, 0xc000127508}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:580 +0x34
os/exec.(*Cmd).Start.func2(0xc000109b20?)
	/usr/local/go/src/os/exec/exec.go:733 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2388
	/usr/local/go/src/os/exec/exec.go:732 +0xa25

                                                
                                                
goroutine 2337 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000985e80, 0xc000078380)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2396
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2336 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3727440)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:304 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 2396
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/delaying_queue.go:141 +0x238

                                                
                                                
goroutine 2400 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000985dd0, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x15d
sync.(*Cond).Wait(0xc001405d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x374d080)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000985e80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00164c910, {0x36f8800, 0xc00062a090}, 0x1, 0xc000078380)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00164c910, 0x3b9aca00, 0x0, 0x1, 0xc000078380)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2337
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 2401 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3731150, 0xc000078380}, 0xc0013f7f50, 0xc0013f7f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3731150, 0xc000078380}, 0x3a?, 0xc0013f7f50, 0xc0013f7f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3731150?, 0xc000078380?}, 0x205d303633353734?, 0x6165373734317830?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x203e6c696e3c2032?, 0x490a7d3e6c696e3c?, 0x3a33312037323130?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2337
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2434 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2401
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2443 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0008a4e10, 0x0)
	/usr/local/go/src/runtime/sema.go:587 +0x15d
sync.(*Cond).Wait(0xc001663d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x374d080)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/util/workqueue/queue.go:282 +0x8b
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008a4f80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:159 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00145c000, {0x36f8800, 0xc001934090}, 0x1, 0xc000078380)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00145c000, 0x3b9aca00, 0x0, 0x1, 0xc000078380)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2499
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 2444 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3731150, 0xc000078380}, 0xc001665f50, 0xc001665f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3731150, 0xc000078380}, 0x76?, 0xc001665f50, 0xc001665f98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3731150?, 0xc000078380?}, 0x616b726f7774656e?, 0x5b73726574706164?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x32383334342e3734?, 0x3938362020202038?, 0x672e6e69616d2036?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2499
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.31.3/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2445 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2444
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                    

Test pass (168/211)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 15.37
4 TestDownloadOnly/v1.20.0/preload-exists 0.07
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.91
9 TestDownloadOnly/v1.20.0/DeleteAll 1.49
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.48
12 TestDownloadOnly/v1.32.1/json-events 9.76
13 TestDownloadOnly/v1.32.1/preload-exists 0
16 TestDownloadOnly/v1.32.1/kubectl 0
17 TestDownloadOnly/v1.32.1/LogsDuration 0.73
18 TestDownloadOnly/v1.32.1/DeleteAll 1.31
19 TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds 1.26
21 TestBinaryMirror 7.29
22 TestOffline 273.68
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.29
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.3
27 TestAddons/Setup 426.51
29 TestAddons/serial/Volcano 65.52
31 TestAddons/serial/GCPAuth/Namespaces 0.34
32 TestAddons/serial/GCPAuth/FakeCredentials 10.53
35 TestAddons/parallel/Registry 34.18
36 TestAddons/parallel/Ingress 63.98
37 TestAddons/parallel/InspektorGadget 26.23
38 TestAddons/parallel/MetricsServer 21.25
40 TestAddons/parallel/CSI 86.13
41 TestAddons/parallel/Headlamp 44.8
42 TestAddons/parallel/CloudSpanner 21.98
43 TestAddons/parallel/LocalPath 32.47
44 TestAddons/parallel/NvidiaDevicePlugin 21.75
45 TestAddons/parallel/Yakd 26.85
47 TestAddons/StoppedEnableDisable 52.79
48 TestCertOptions 578.75
49 TestCertExpiration 858.56
50 TestDockerFlags 476.01
51 TestForceSystemdFlag 506.63
52 TestForceSystemdEnv 397.5
59 TestErrorSpam/start 16.53
60 TestErrorSpam/status 35.32
61 TestErrorSpam/pause 22.22
62 TestErrorSpam/unpause 22.1
63 TestErrorSpam/stop 58.85
66 TestFunctional/serial/CopySyncFile 0.04
67 TestFunctional/serial/StartWithProxy 218.19
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 121.38
70 TestFunctional/serial/KubeContext 0.13
71 TestFunctional/serial/KubectlGetPods 0.22
74 TestFunctional/serial/CacheCmd/cache/add_remote 26.06
75 TestFunctional/serial/CacheCmd/cache/add_local 10.45
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.28
77 TestFunctional/serial/CacheCmd/cache/list 0.27
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 9.17
79 TestFunctional/serial/CacheCmd/cache/cache_reload 35.43
80 TestFunctional/serial/CacheCmd/cache/delete 0.56
81 TestFunctional/serial/MinikubeKubectlCmd 0.48
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 2.31
83 TestFunctional/serial/ExtraConfig 123.29
84 TestFunctional/serial/ComponentHealth 0.18
85 TestFunctional/serial/LogsCmd 8.49
86 TestFunctional/serial/LogsFileCmd 10.27
87 TestFunctional/serial/InvalidService 20.57
89 TestFunctional/parallel/ConfigCmd 1.95
93 TestFunctional/parallel/StatusCmd 43.08
97 TestFunctional/parallel/ServiceCmdConnect 26.94
98 TestFunctional/parallel/AddonsCmd 0.78
99 TestFunctional/parallel/PersistentVolumeClaim 48.71
101 TestFunctional/parallel/SSHCmd 21.26
102 TestFunctional/parallel/CpCmd 60.38
103 TestFunctional/parallel/MySQL 60.26
104 TestFunctional/parallel/FileSync 10.17
105 TestFunctional/parallel/CertSync 61.15
109 TestFunctional/parallel/NodeLabels 0.18
111 TestFunctional/parallel/NonActiveRuntimeDisabled 10.08
113 TestFunctional/parallel/License 1.6
114 TestFunctional/parallel/ServiceCmd/DeployApp 17.47
115 TestFunctional/parallel/ServiceCmd/List 15.39
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 10
118 TestFunctional/parallel/ServiceCmd/JSONOutput 14.3
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 22.77
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
130 TestFunctional/parallel/ProfileCmd/profile_not_create 13.21
132 TestFunctional/parallel/ProfileCmd/profile_list 12.51
133 TestFunctional/parallel/ProfileCmd/profile_json_output 13
134 TestFunctional/parallel/Version/short 0.28
135 TestFunctional/parallel/Version/components 8.48
136 TestFunctional/parallel/ImageCommands/ImageListShort 7.73
137 TestFunctional/parallel/ImageCommands/ImageListTable 7.46
138 TestFunctional/parallel/ImageCommands/ImageListJson 7.55
139 TestFunctional/parallel/ImageCommands/ImageListYaml 7.58
140 TestFunctional/parallel/ImageCommands/ImageBuild 27.57
141 TestFunctional/parallel/ImageCommands/Setup 2.28
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 17.15
143 TestFunctional/parallel/DockerEnv/powershell 43.27
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 16.6
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 17.17
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 8.12
147 TestFunctional/parallel/UpdateContextCmd/no_changes 2.51
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.5
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.5
150 TestFunctional/parallel/ImageCommands/ImageRemove 15.52
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 15.13
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 7.48
153 TestFunctional/delete_echo-server_images 0.2
154 TestFunctional/delete_my-image_image 0.09
155 TestFunctional/delete_minikube_cached_images 0.09
159 TestMultiControlPlane/serial/StartCluster 696.48
160 TestMultiControlPlane/serial/DeployApp 13.33
162 TestMultiControlPlane/serial/AddWorkerNode 257.17
163 TestMultiControlPlane/serial/NodeLabels 0.19
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 47.79
165 TestMultiControlPlane/serial/CopyFile 628.22
166 TestMultiControlPlane/serial/StopSecondaryNode 72.61
170 TestImageBuild/serial/Setup 191.97
171 TestImageBuild/serial/NormalBuild 10.31
172 TestImageBuild/serial/BuildWithBuildArg 8.75
173 TestImageBuild/serial/BuildWithDockerIgnore 8.07
174 TestImageBuild/serial/BuildWithSpecifiedDockerfile 8.07
178 TestJSONOutput/start/Command 195.81
179 TestJSONOutput/start/Audit 0.03
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 8.44
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 8.19
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 38.71
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 1.7
206 TestMainNoArgs 0.24
207 TestMinikubeProfile 520.45
210 TestMountStart/serial/StartWithMountFirst 150.87
211 TestMountStart/serial/VerifyMountFirst 9.47
212 TestMountStart/serial/StartWithMountSecond 150.1
213 TestMountStart/serial/VerifyMountSecond 9.72
214 TestMountStart/serial/DeleteFirst 32.2
215 TestMountStart/serial/VerifyMountPostDelete 9.1
216 TestMountStart/serial/Stop 30.09
217 TestMountStart/serial/RestartStopped 114.79
218 TestMountStart/serial/VerifyMountPostStop 9.18
221 TestMultiNode/serial/FreshStart2Nodes 420.76
222 TestMultiNode/serial/DeployApp2Nodes 9.39
224 TestMultiNode/serial/AddNode 237.8
225 TestMultiNode/serial/MultiNodeLabels 0.19
226 TestMultiNode/serial/ProfileList 34.81
227 TestMultiNode/serial/CopyFile 352.27
228 TestMultiNode/serial/StopNode 74.88
229 TestMultiNode/serial/StartAfterStop 192.11
234 TestPreload 495.74
235 TestScheduledStopWindows 320.21
240 TestRunningBinaryUpgrade 1024.62
242 TestKubernetesUpgrade 1276.41
245 TestNoKubernetes/serial/StartNoK8sWithVersion 0.62
247 TestStoppedBinaryUpgrade/Setup 0.81
248 TestStoppedBinaryUpgrade/Upgrade 794.19
268 TestPause/serial/Start 379.69
269 TestStoppedBinaryUpgrade/MinikubeLogs 9.34
270 TestPause/serial/SecondStartNoReconfiguration 448.13
272 TestPause/serial/Pause 9.17
274 TestPause/serial/VerifyStatus 14.16
275 TestPause/serial/Unpause 8.4
276 TestPause/serial/PauseAgain 9.4
TestDownloadOnly/v1.20.0/json-events (15.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-117200 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-117200 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (15.3672634s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (15.37s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0127 10:33:13.698862    5956 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0127 10:33:13.765910    5956 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.91s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-117200
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-117200: exit status 85 (909.2689ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-117200 | minikube6\jenkins | v1.35.0 | 27 Jan 25 10:32 UTC |          |
	|         | -p download-only-117200        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 10:32:58
	Running on machine: minikube6
	Binary: Built with gc go1.23.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 10:32:58.454815    6916 out.go:345] Setting OutFile to fd 688 ...
	I0127 10:32:58.524816    6916 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 10:32:58.524816    6916 out.go:358] Setting ErrFile to fd 692...
	I0127 10:32:58.524816    6916 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0127 10:32:58.538815    6916 root.go:314] Error reading config file at C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0127 10:32:58.548811    6916 out.go:352] Setting JSON to true
	I0127 10:32:58.552806    6916 start.go:129] hostinfo: {"hostname":"minikube6","uptime":436961,"bootTime":1737537016,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5371 Build 19045.5371","kernelVersion":"10.0.19045.5371 Build 19045.5371","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0127 10:32:58.552806    6916 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0127 10:32:58.559809    6916 out.go:97] [download-only-117200] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	I0127 10:32:58.560808    6916 notify.go:220] Checking for updates...
	W0127 10:32:58.560808    6916 preload.go:293] Failed to list preload files: open C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0127 10:32:58.562809    6916 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 10:32:58.565818    6916 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0127 10:32:58.568808    6916 out.go:169] MINIKUBE_LOCATION=20318
	I0127 10:32:58.571917    6916 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0127 10:32:58.592322    6916 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 10:32:58.592971    6916 driver.go:394] Setting default libvirt URI to qemu:///system
	I0127 10:33:03.935103    6916 out.go:97] Using the hyperv driver based on user configuration
	I0127 10:33:03.935659    6916 start.go:297] selected driver: hyperv
	I0127 10:33:03.935740    6916 start.go:901] validating driver "hyperv" against <nil>
	I0127 10:33:03.935740    6916 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0127 10:33:03.983791    6916 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0127 10:33:03.985236    6916 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0127 10:33:03.985289    6916 cni.go:84] Creating CNI manager for ""
	I0127 10:33:03.985289    6916 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0127 10:33:03.985825    6916 start.go:340] cluster config:
	{Name:download-only-117200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-117200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube6:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0127 10:33:03.986945    6916 iso.go:125] acquiring lock: {Name:mk49322cc4182124f5e9cd1631076166a7ff463d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0127 10:33:03.990195    6916 out.go:97] Downloading VM boot image ...
	I0127 10:33:03.990195    6916 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.35.0-amd64.iso
	I0127 10:33:07.335461    6916 out.go:97] Starting "download-only-117200" primary control-plane node in "download-only-117200" cluster
	I0127 10:33:07.335461    6916 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0127 10:33:07.378637    6916 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0127 10:33:07.378637    6916 cache.go:56] Caching tarball of preloaded images
	I0127 10:33:07.379418    6916 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0127 10:33:07.382383    6916 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0127 10:33:07.382383    6916 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0127 10:33:07.455079    6916 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0127 10:33:10.242101    6916 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0127 10:33:10.243423    6916 preload.go:254] verifying checksum of C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0127 10:33:11.191534    6916 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0127 10:33:11.192668    6916 profile.go:143] Saving config to C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-117200\config.json ...
	I0127 10:33:11.193304    6916 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube6\minikube-integration\.minikube\profiles\download-only-117200\config.json: {Name:mka75f40ab2f72c984365bca6c246d0c829a7b1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0127 10:33:11.194989    6916 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0127 10:33:11.196885    6916 download.go:108] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-117200 host does not exist
	  To start a cluster, run: "minikube start -p download-only-117200"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.91s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (1.49s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.484659s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.49s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.48s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-117200
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-117200: (1.4745215s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.48s)

                                                
                                    
TestDownloadOnly/v1.32.1/json-events (9.76s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-201500 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-201500 --force --alsologtostderr --kubernetes-version=v1.32.1 --container-runtime=docker --driver=hyperv: (9.7559299s)
--- PASS: TestDownloadOnly/v1.32.1/json-events (9.76s)

                                                
                                    
TestDownloadOnly/v1.32.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/preload-exists
I0127 10:33:27.394806    5956 preload.go:131] Checking if preload exists for k8s version v1.32.1 and runtime docker
I0127 10:33:27.394806    5956 preload.go:146] Found local preload: C:\Users\jenkins.minikube6\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.1-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/kubectl
--- PASS: TestDownloadOnly/v1.32.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.1/LogsDuration (0.73s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-201500
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-201500: exit status 85 (732.2379ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-117200 | minikube6\jenkins | v1.35.0 | 27 Jan 25 10:32 UTC |                     |
	|         | -p download-only-117200        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube6\jenkins | v1.35.0 | 27 Jan 25 10:33 UTC | 27 Jan 25 10:33 UTC |
	| delete  | -p download-only-117200        | download-only-117200 | minikube6\jenkins | v1.35.0 | 27 Jan 25 10:33 UTC | 27 Jan 25 10:33 UTC |
	| start   | -o=json --download-only        | download-only-201500 | minikube6\jenkins | v1.35.0 | 27 Jan 25 10:33 UTC |                     |
	|         | -p download-only-201500        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.32.1   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/01/27 10:33:17
	Running on machine: minikube6
	Binary: Built with gc go1.23.4 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0127 10:33:17.741519    4452 out.go:345] Setting OutFile to fd 688 ...
	I0127 10:33:17.815449    4452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 10:33:17.815449    4452 out.go:358] Setting ErrFile to fd 684...
	I0127 10:33:17.815449    4452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 10:33:17.837612    4452 out.go:352] Setting JSON to true
	I0127 10:33:17.840614    4452 start.go:129] hostinfo: {"hostname":"minikube6","uptime":436981,"bootTime":1737537016,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5371 Build 19045.5371","kernelVersion":"10.0.19045.5371 Build 19045.5371","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0127 10:33:17.840699    4452 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0127 10:33:17.847027    4452 out.go:97] [download-only-201500] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	I0127 10:33:17.847687    4452 notify.go:220] Checking for updates...
	I0127 10:33:17.850038    4452 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 10:33:17.852855    4452 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0127 10:33:17.855186    4452 out.go:169] MINIKUBE_LOCATION=20318
	I0127 10:33:17.858504    4452 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0127 10:33:17.864516    4452 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0127 10:33:17.864740    4452 driver.go:394] Setting default libvirt URI to qemu:///system
	
	
	* The control-plane node download-only-201500 host does not exist
	  To start a cluster, run: "minikube start -p download-only-201500"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.1/LogsDuration (0.73s)

TestDownloadOnly/v1.32.1/DeleteAll (1.31s)

=== RUN   TestDownloadOnly/v1.32.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.3134585s)
--- PASS: TestDownloadOnly/v1.32.1/DeleteAll (1.31s)

TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (1.26s)

=== RUN   TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-201500
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-201500: (1.264183s)
--- PASS: TestDownloadOnly/v1.32.1/DeleteAlwaysSucceeds (1.26s)

TestBinaryMirror (7.29s)

=== RUN   TestBinaryMirror
I0127 10:33:33.299338    5956 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.1/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.32.1/bin/windows/amd64/kubectl.exe.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-704900 --alsologtostderr --binary-mirror http://127.0.0.1:65315 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-704900 --alsologtostderr --binary-mirror http://127.0.0.1:65315 --driver=hyperv: (5.9809674s)
helpers_test.go:175: Cleaning up "binary-mirror-704900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-704900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p binary-mirror-704900: (1.0463231s)
--- PASS: TestBinaryMirror (7.29s)

TestOffline (273.68s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-670300 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-670300 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (3m47.2549669s)
helpers_test.go:175: Cleaning up "offline-docker-670300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-670300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-670300: (46.4221221s)
--- PASS: TestOffline (273.68s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.29s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-226100
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-226100: exit status 85 (288.4977ms)
-- stdout --
	* Profile "addons-226100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-226100"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.29s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.3s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-226100
addons_test.go:950: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-226100: exit status 85 (303.1033ms)
-- stdout --
	* Profile "addons-226100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-226100"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.30s)

TestAddons/Setup (426.51s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-226100 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperv --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-226100 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperv --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (7m6.5080553s)
--- PASS: TestAddons/Setup (426.51s)

TestAddons/serial/Volcano (65.52s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:807: volcano-scheduler stabilized in 24.7739ms
addons_test.go:823: volcano-controller stabilized in 24.7739ms
addons_test.go:815: volcano-admission stabilized in 24.7739ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-5vzqw" [77dbb735-e659-41f9-be49-3d094d4d836a] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.0067615s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-75d8f6b5c-bbzhd" [80ec78c1-8fbc-47f7-9374-bd71380e524d] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.0062483s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-86bdc5c9c-9kh7t" [e2880d44-776b-4e7d-9117-73aef0142fc3] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.0076775s
addons_test.go:842: (dbg) Run:  kubectl --context addons-226100 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-226100 create -f testdata\vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-226100 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [14927d9e-f1ed-4ce3-9498-4dfe0299da3d] Pending
helpers_test.go:344: "test-job-nginx-0" [14927d9e-f1ed-4ce3-9498-4dfe0299da3d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [14927d9e-f1ed-4ce3-9498-4dfe0299da3d] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 22.0087439s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-226100 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-226100 addons disable volcano --alsologtostderr -v=1: (25.6163283s)
--- PASS: TestAddons/serial/Volcano (65.52s)

TestAddons/serial/GCPAuth/Namespaces (0.34s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-226100 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-226100 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.34s)

TestAddons/serial/GCPAuth/FakeCredentials (10.53s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-226100 create -f testdata\busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-226100 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [05b97817-abf8-4f5d-8473-353ac5beb5c2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [05b97817-abf8-4f5d-8473-353ac5beb5c2] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.0081535s
addons_test.go:633: (dbg) Run:  kubectl --context addons-226100 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-226100 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-226100 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-226100 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.53s)

TestAddons/parallel/Registry (34.18s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 9.6501ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-z72t8" [9184e240-12c3-4cfa-8f90-b0bfafeb4e08] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0062303s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-v44dp" [199f0748-0224-466e-a1d0-41f0ab975897] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0125261s
addons_test.go:331: (dbg) Run:  kubectl --context addons-226100 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-226100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-226100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.6937054s)
addons_test.go:350: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-226100 ip
addons_test.go:350: (dbg) Done: out/minikube-windows-amd64.exe -p addons-226100 ip: (2.774866s)
2025/01/27 10:42:50 [DEBUG] GET http://172.29.193.248:5000
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-226100 addons disable registry --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-226100 addons disable registry --alsologtostderr -v=1: (16.4327795s)
--- PASS: TestAddons/parallel/Registry (34.18s)

TestAddons/parallel/Ingress (63.98s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-226100 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-226100 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-226100 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [97d3d799-94ca-46ee-bb25-0fe375c001aa] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [97d3d799-94ca-46ee-bb25-0fe375c001aa] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.00829s
I0127 10:43:48.176590    5956 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-226100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-226100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (9.3624099s)
addons_test.go:286: (dbg) Run:  kubectl --context addons-226100 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-226100 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-226100 ip: (2.625165s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.29.193.248
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-226100 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-226100 addons disable ingress-dns --alsologtostderr -v=1: (15.2916084s)
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-226100 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-226100 addons disable ingress --alsologtostderr -v=1: (21.7147261s)
--- PASS: TestAddons/parallel/Ingress (63.98s)

TestAddons/parallel/InspektorGadget (26.23s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dj8z5" [d45d41c9-16bf-428b-a9df-c280e34036f4] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0194736s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-226100 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-226100 addons disable inspektor-gadget --alsologtostderr -v=1: (21.2058645s)
--- PASS: TestAddons/parallel/InspektorGadget (26.23s)

TestAddons/parallel/MetricsServer (21.25s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 16.7851ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-xtw2b" [6bc75322-fad5-40ef-bbc0-ea6e6a00bf6a] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0062942s
addons_test.go:402: (dbg) Run:  kubectl --context addons-226100 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-226100 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-226100 addons disable metrics-server --alsologtostderr -v=1: (15.9691451s)
--- PASS: TestAddons/parallel/MetricsServer (21.25s)

TestAddons/parallel/CSI (86.13s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0127 10:43:05.475026    5956 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0127 10:43:05.486462    5956 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0127 10:43:05.486530    5956 kapi.go:107] duration metric: took 11.475ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 11.5042ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-226100 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-226100 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [26f5b687-5ef7-46a3-932e-5ff2d3014787] Pending
helpers_test.go:344: "task-pv-pod" [26f5b687-5ef7-46a3-932e-5ff2d3014787] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [26f5b687-5ef7-46a3-932e-5ff2d3014787] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.006687s
addons_test.go:511: (dbg) Run:  kubectl --context addons-226100 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-226100 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-226100 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-226100 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-226100 delete pod task-pv-pod: (1.1529262s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-226100 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-226100 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-226100 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [1de5221e-506f-4b61-9eda-e2703c9de899] Pending
helpers_test.go:344: "task-pv-pod-restore" [1de5221e-506f-4b61-9eda-e2703c9de899] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [1de5221e-506f-4b61-9eda-e2703c9de899] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.0094636s
addons_test.go:553: (dbg) Run:  kubectl --context addons-226100 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-226100 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-226100 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-226100 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-226100 addons disable volumesnapshots --alsologtostderr -v=1: (15.3594701s)
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-226100 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-226100 addons disable csi-hostpath-driver --alsologtostderr -v=1: (21.1566583s)
--- PASS: TestAddons/parallel/CSI (86.13s)

TestAddons/parallel/Headlamp (44.8s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-226100 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-226100 --alsologtostderr -v=1: (16.0843926s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-69d78d796f-gch7t" [ab6fabd0-799f-4e2c-aaec-bcd4fda1559e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-69d78d796f-gch7t" [ab6fabd0-799f-4e2c-aaec-bcd4fda1559e] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 21.0061242s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-226100 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-226100 addons disable headlamp --alsologtostderr -v=1: (7.7054478s)
--- PASS: TestAddons/parallel/Headlamp (44.80s)

TestAddons/parallel/CloudSpanner (21.98s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5d76cffbc-vsckd" [0d5a25f3-87ee-4289-b4f6-a33e67d2cfcd] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004232s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-226100 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-226100 addons disable cloud-spanner --alsologtostderr -v=1: (15.9571858s)
--- PASS: TestAddons/parallel/CloudSpanner (21.98s)

TestAddons/parallel/LocalPath (32.47s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-226100 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-226100 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-226100 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [a3498d50-08ff-4dfe-b7a1-e244288f8333] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [a3498d50-08ff-4dfe-b7a1-e244288f8333] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [a3498d50-08ff-4dfe-b7a1-e244288f8333] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.0059918s
addons_test.go:906: (dbg) Run:  kubectl --context addons-226100 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-226100 ssh "cat /opt/local-path-provisioner/pvc-ea6fa48e-501b-4b73-b3a0-2145b4fdc8ce_default_test-pvc/file1"
addons_test.go:915: (dbg) Done: out/minikube-windows-amd64.exe -p addons-226100 ssh "cat /opt/local-path-provisioner/pvc-ea6fa48e-501b-4b73-b3a0-2145b4fdc8ce_default_test-pvc/file1": (10.629574s)
addons_test.go:927: (dbg) Run:  kubectl --context addons-226100 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-226100 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-226100 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-226100 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (8.1685861s)
--- PASS: TestAddons/parallel/LocalPath (32.47s)

TestAddons/parallel/NvidiaDevicePlugin (21.75s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-qszvl" [7db69309-7eb5-4b9b-85a0-0b1dd952c5fe] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0072354s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-226100 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-226100 addons disable nvidia-device-plugin --alsologtostderr -v=1: (15.738996s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (21.75s)

TestAddons/parallel/Yakd (26.85s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-vxlhb" [a5913386-f664-487a-b2b9-0020352f9a91] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006306s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-226100 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-226100 addons disable yakd --alsologtostderr -v=1: (20.8425102s)
--- PASS: TestAddons/parallel/Yakd (26.85s)

TestAddons/StoppedEnableDisable (52.79s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-226100
addons_test.go:170: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-226100: (40.3359097s)
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-226100
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-226100: (4.8831099s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-226100
addons_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-226100: (4.6247228s)
addons_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-226100
addons_test.go:183: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-226100: (2.9463008s)
--- PASS: TestAddons/StoppedEnableDisable (52.79s)

TestCertOptions (578.75s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-152800 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
E0127 13:17:04.067196    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-152800 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (8m34.6047501s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-152800 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-152800 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (10.4089691s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-152800 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-152800 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-152800 -- "sudo cat /etc/kubernetes/admin.conf": (10.8720033s)
helpers_test.go:175: Cleaning up "cert-options-152800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-152800
E0127 13:25:47.513860    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-152800: (42.6787382s)
--- PASS: TestCertOptions (578.75s)

TestCertExpiration (858.56s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-934800 --memory=2048 --cert-expiration=3m --driver=hyperv
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-934800 --memory=2048 --cert-expiration=3m --driver=hyperv: (6m24.0557769s)
E0127 13:20:47.509476    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 13:22:04.069551    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-934800 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-934800 --memory=2048 --cert-expiration=8760h --driver=hyperv: (4m9.1896515s)
helpers_test.go:175: Cleaning up "cert-expiration-934800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-934800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-934800: (45.3129741s)
--- PASS: TestCertExpiration (858.56s)

TestDockerFlags (476.01s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-842200 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-842200 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (6m48.7331857s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-842200 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-842200 ssh "sudo systemctl show docker --property=Environment --no-pager": (10.3211192s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-842200 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-842200 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (10.234512s)
helpers_test.go:175: Cleaning up "docker-flags-842200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-842200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-842200: (46.7216482s)
--- PASS: TestDockerFlags (476.01s)

TestForceSystemdFlag (506.63s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-580400 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
E0127 13:00:47.496892    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-580400 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (7m29.3417992s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-580400 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-580400 ssh "docker info --format {{.CgroupDriver}}": (9.7457103s)
helpers_test.go:175: Cleaning up "force-systemd-flag-580400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-580400
E0127 13:08:27.153290    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-580400: (47.5378043s)
--- PASS: TestForceSystemdFlag (506.63s)

TestForceSystemdEnv (397.5s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-667400 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-667400 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (5m36.915982s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-667400 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-667400 ssh "docker info --format {{.CgroupDriver}}": (10.4060337s)
helpers_test.go:175: Cleaning up "force-systemd-env-667400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-667400
E0127 13:15:47.506784    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-667400: (50.1733865s)
--- PASS: TestForceSystemdEnv (397.50s)

TestErrorSpam/start (16.53s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 start --dry-run: (5.4515623s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 start --dry-run: (5.5581718s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 start --dry-run: (5.5188039s)
--- PASS: TestErrorSpam/start (16.53s)

TestErrorSpam/status (35.32s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 status: (12.1599741s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 status: (11.5793273s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 status: (11.5779951s)
--- PASS: TestErrorSpam/status (35.32s)

TestErrorSpam/pause (22.22s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 pause: (7.6404071s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 pause: (7.3935234s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 pause: (7.1802163s)
--- PASS: TestErrorSpam/pause (22.22s)

TestErrorSpam/unpause (22.1s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 unpause: (7.3428461s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 unpause
E0127 10:50:47.416336    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 10:50:47.422799    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 10:50:47.434276    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 10:50:47.455866    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 10:50:47.498021    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 10:50:47.580677    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 10:50:47.743894    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 10:50:48.065843    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 10:50:48.708263    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 10:50:49.990352    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 10:50:52.552150    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 unpause: (7.3479271s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 unpause
E0127 10:50:57.674486    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 unpause: (7.4040467s)
--- PASS: TestErrorSpam/unpause (22.10s)

TestErrorSpam/stop (58.85s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 stop
E0127 10:51:07.917153    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 10:51:28.400000    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 stop: (37.8180697s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 stop: (10.6391885s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-762000 --log_dir C:\Users\jenkins.minikube6\AppData\Local\Temp\nospam-762000 stop: (10.3846534s)
--- PASS: TestErrorSpam/stop (58.85s)

TestFunctional/serial/CopySyncFile (0.04s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: C:\Users\jenkins.minikube6\minikube-integration\.minikube\files\etc\test\nested\copy\5956\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

TestFunctional/serial/StartWithProxy (218.19s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-253500 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0127 10:53:31.286385    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 10:55:47.418544    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-253500 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m38.1834371s)
--- PASS: TestFunctional/serial/StartWithProxy (218.19s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (121.38s)
=== RUN   TestFunctional/serial/SoftStart
I0127 10:55:53.969117    5956 config.go:182] Loaded profile config "functional-253500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
functional_test.go:659: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-253500 --alsologtostderr -v=8
E0127 10:56:15.130386    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-253500 --alsologtostderr -v=8: (2m1.3725091s)
functional_test.go:663: soft start took 2m1.3748103s for "functional-253500" cluster.
I0127 10:57:55.344096    5956 config.go:182] Loaded profile config "functional-253500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/SoftStart (121.38s)

TestFunctional/serial/KubeContext (0.13s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.13s)

TestFunctional/serial/KubectlGetPods (0.22s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-253500 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.22s)

TestFunctional/serial/CacheCmd/cache/add_remote (26.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 cache add registry.k8s.io/pause:3.1: (8.7247629s)
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 cache add registry.k8s.io/pause:3.3: (8.8608678s)
functional_test.go:1049: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 cache add registry.k8s.io/pause:latest: (8.4779285s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (26.06s)

TestFunctional/serial/CacheCmd/cache/add_local (10.45s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-253500 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1964656440\001
functional_test.go:1077: (dbg) Done: docker build -t minikube-local-cache-test:functional-253500 C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1964656440\001: (1.8407988s)
functional_test.go:1089: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 cache add minikube-local-cache-test:functional-253500
functional_test.go:1089: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 cache add minikube-local-cache-test:functional-253500: (8.1835273s)
functional_test.go:1094: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 cache delete minikube-local-cache-test:functional-253500
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-253500
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (10.45s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.28s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.28s)

TestFunctional/serial/CacheCmd/cache/list (0.27s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.27s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.17s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 ssh sudo crictl images
functional_test.go:1124: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 ssh sudo crictl images: (9.1695983s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.17s)

TestFunctional/serial/CacheCmd/cache/cache_reload (35.43s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 ssh sudo docker rmi registry.k8s.io/pause:latest: (9.1141519s)
functional_test.go:1153: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-253500 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (9.1531033s)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 cache reload: (7.9834776s)
functional_test.go:1163: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.175498s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (35.43s)

TestFunctional/serial/CacheCmd/cache/delete (0.56s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.56s)

TestFunctional/serial/MinikubeKubectlCmd (0.48s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 kubectl -- --context functional-253500 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.48s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (2.31s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out\kubectl.exe --context functional-253500 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (2.31s)

TestFunctional/serial/ExtraConfig (123.29s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-253500 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0127 11:00:47.422951    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-253500 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m3.283286s)
functional_test.go:761: restart took 2m3.283286s for "functional-253500" cluster.
I0127 11:01:24.002396    5956 config.go:182] Loaded profile config "functional-253500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
--- PASS: TestFunctional/serial/ExtraConfig (123.29s)

TestFunctional/serial/ComponentHealth (0.18s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-253500 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.18s)

TestFunctional/serial/LogsCmd (8.49s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 logs
functional_test.go:1236: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 logs: (8.4903125s)
--- PASS: TestFunctional/serial/LogsCmd (8.49s)

TestFunctional/serial/LogsFileCmd (10.27s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1954498395\001\logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 logs --file C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalserialLogsFileCmd1954498395\001\logs.txt: (10.2674082s)
--- PASS: TestFunctional/serial/LogsFileCmd (10.27s)

TestFunctional/serial/InvalidService (20.57s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-253500 apply -f testdata\invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-253500
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-253500: exit status 115 (16.1202809s)

-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://172.29.200.214:31320 |
	|-----------|-------------|-------------|-----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube_service_8fb87d8e79e761d215f3221b4a4d8a6300edfb06_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-253500 delete -f testdata\invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-253500 delete -f testdata\invalidsvc.yaml: (1.04428s)
--- PASS: TestFunctional/serial/InvalidService (20.57s)

TestFunctional/parallel/ConfigCmd (1.95s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-253500 config get cpus: exit status 14 (276.7748ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-253500 config get cpus: exit status 14 (251.5666ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.95s)

TestFunctional/parallel/StatusCmd (43.08s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 status
functional_test.go:854: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 status: (13.4856713s)
functional_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (14.6406062s)
functional_test.go:872: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 status -o json
functional_test.go:872: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 status -o json: (14.951666s)
--- PASS: TestFunctional/parallel/StatusCmd (43.08s)

TestFunctional/parallel/ServiceCmdConnect (26.94s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-253500 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-253500 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-kld5r" [c62e0050-e6ca-4c86-a401-9418f7cc17b5] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-kld5r" [c62e0050-e6ca-4c86-a401-9418f7cc17b5] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.0067466s
functional_test.go:1649: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 service hello-node-connect --url
functional_test.go:1649: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 service hello-node-connect --url: (18.4582077s)
functional_test.go:1655: found endpoint for hello-node-connect: http://172.29.200.214:31945
functional_test.go:1675: http://172.29.200.214:31945: success! body:

Hostname: hello-node-connect-58f9cf68d8-kld5r

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.29.200.214:8080/

Request Headers:
	accept-encoding=gzip
	host=172.29.200.214:31945
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (26.94s)

TestFunctional/parallel/AddonsCmd (0.78s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.78s)

TestFunctional/parallel/PersistentVolumeClaim (48.71s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [697c2539-6566-4993-9243-8a2b067b53a1] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.012522s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-253500 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-253500 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-253500 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-253500 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [db777fb9-448b-4768-93ea-728915abe56a] Pending
helpers_test.go:344: "sp-pod" [db777fb9-448b-4768-93ea-728915abe56a] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [db777fb9-448b-4768-93ea-728915abe56a] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 33.0060718s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-253500 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-253500 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-253500 delete -f testdata/storage-provisioner/pod.yaml: (1.5213378s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-253500 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c81fa1f1-d10b-419f-8135-1f904294d8d8] Pending
helpers_test.go:344: "sp-pod" [c81fa1f1-d10b-419f-8135-1f904294d8d8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c81fa1f1-d10b-419f-8135-1f904294d8d8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.0072246s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-253500 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (48.71s)

TestFunctional/parallel/SSHCmd (21.26s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 ssh "echo hello"
functional_test.go:1725: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 ssh "echo hello": (10.4918973s)
functional_test.go:1742: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 ssh "cat /etc/hostname": (10.7630292s)
--- PASS: TestFunctional/parallel/SSHCmd (21.26s)

TestFunctional/parallel/CpCmd (60.38s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 cp testdata\cp-test.txt /home/docker/cp-test.txt: (8.3273559s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 ssh -n functional-253500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 ssh -n functional-253500 "sudo cat /home/docker/cp-test.txt": (10.3500498s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 cp functional-253500:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd3505736301\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 cp functional-253500:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestFunctionalparallelCpCmd3505736301\001\cp-test.txt: (12.0062864s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 ssh -n functional-253500 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 ssh -n functional-253500 "sudo cat /home/docker/cp-test.txt": (11.4374985s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (8.3767243s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 ssh -n functional-253500 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 ssh -n functional-253500 "sudo cat /tmp/does/not/exist/cp-test.txt": (9.8776942s)
--- PASS: TestFunctional/parallel/CpCmd (60.38s)

TestFunctional/parallel/MySQL (60.26s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-253500 replace --force -f testdata\mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-hqwj2" [a64c0631-12c6-457f-bea8-2b8c6696df0c] Pending
helpers_test.go:344: "mysql-58ccfd96bb-hqwj2" [a64c0631-12c6-457f-bea8-2b8c6696df0c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-hqwj2" [a64c0631-12c6-457f-bea8-2b8c6696df0c] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 46.0139842s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-253500 exec mysql-58ccfd96bb-hqwj2 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-253500 exec mysql-58ccfd96bb-hqwj2 -- mysql -ppassword -e "show databases;": exit status 1 (310.6311ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0127 11:05:50.373405    5956 retry.go:31] will retry after 1.149996423s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-253500 exec mysql-58ccfd96bb-hqwj2 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-253500 exec mysql-58ccfd96bb-hqwj2 -- mysql -ppassword -e "show databases;": exit status 1 (310.9737ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0127 11:05:51.844840    5956 retry.go:31] will retry after 949.37401ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-253500 exec mysql-58ccfd96bb-hqwj2 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-253500 exec mysql-58ccfd96bb-hqwj2 -- mysql -ppassword -e "show databases;": exit status 1 (279.3559ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0127 11:05:53.083170    5956 retry.go:31] will retry after 2.321486352s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-253500 exec mysql-58ccfd96bb-hqwj2 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-253500 exec mysql-58ccfd96bb-hqwj2 -- mysql -ppassword -e "show databases;": exit status 1 (323.2298ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0127 11:05:55.743410    5956 retry.go:31] will retry after 3.601440714s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-253500 exec mysql-58ccfd96bb-hqwj2 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-253500 exec mysql-58ccfd96bb-hqwj2 -- mysql -ppassword -e "show databases;": exit status 1 (288.1771ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0127 11:05:59.643689    5956 retry.go:31] will retry after 3.902221232s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-253500 exec mysql-58ccfd96bb-hqwj2 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (60.26s)

TestFunctional/parallel/FileSync (10.17s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/5956/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 ssh "sudo cat /etc/test/nested/copy/5956/hosts"
functional_test.go:1931: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 ssh "sudo cat /etc/test/nested/copy/5956/hosts": (10.1707589s)
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (10.17s)

TestFunctional/parallel/CertSync (61.15s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/5956.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 ssh "sudo cat /etc/ssl/certs/5956.pem"
functional_test.go:1973: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 ssh "sudo cat /etc/ssl/certs/5956.pem": (10.0285471s)
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/5956.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 ssh "sudo cat /usr/share/ca-certificates/5956.pem"
functional_test.go:1973: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 ssh "sudo cat /usr/share/ca-certificates/5956.pem": (10.4047595s)
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 ssh "sudo cat /etc/ssl/certs/51391683.0": (10.49983s)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/59562.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 ssh "sudo cat /etc/ssl/certs/59562.pem"
functional_test.go:2000: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 ssh "sudo cat /etc/ssl/certs/59562.pem": (10.2372596s)
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/59562.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 ssh "sudo cat /usr/share/ca-certificates/59562.pem"
functional_test.go:2000: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 ssh "sudo cat /usr/share/ca-certificates/59562.pem": (9.8857218s)
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (10.0961033s)
--- PASS: TestFunctional/parallel/CertSync (61.15s)

TestFunctional/parallel/NodeLabels (0.18s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-253500 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.18s)

TestFunctional/parallel/NonActiveRuntimeDisabled (10.08s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-253500 ssh "sudo systemctl is-active crio": exit status 1 (10.0796226s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (10.08s)

TestFunctional/parallel/License (1.6s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2288: (dbg) Done: out/minikube-windows-amd64.exe license: (1.578051s)
--- PASS: TestFunctional/parallel/License (1.60s)

TestFunctional/parallel/ServiceCmd/DeployApp (17.47s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-253500 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-253500 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-fkhkb" [7e921330-f118-4069-bf04-b25f8b9a8705] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-fkhkb" [7e921330-f118-4069-bf04-b25f8b9a8705] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 17.0071383s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (17.47s)

TestFunctional/parallel/ServiceCmd/List (15.39s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 service list
functional_test.go:1459: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 service list: (15.390332s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (15.39s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (10s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-253500 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-253500 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-253500 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 11548: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 12176: TerminateProcess: Access is denied.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-253500 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (10.00s)

TestFunctional/parallel/ServiceCmd/JSONOutput (14.3s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 service list -o json
functional_test.go:1489: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 service list -o json: (14.2962432s)
functional_test.go:1494: Took "14.2969638s" to run "out/minikube-windows-amd64.exe -p functional-253500 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (14.30s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-253500 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (22.77s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-253500 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [7420c829-d356-4285-9ecb-e567b756231f] Pending
helpers_test.go:344: "nginx-svc" [7420c829-d356-4285-9ecb-e567b756231f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [7420c829-d356-4285-9ecb-e567b756231f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 22.005597s
I0127 11:02:59.505803    5956 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (22.77s)
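
Note: the setup above applies testdata\testsvc.yaml and then waits for pods matching the label selector run=nginx-svc in the default namespace. A sketch of checking the same state by hand with kubectl (selector, namespace, and the nginx-svc service name are taken from the log lines above):

# Pods the test waits on
kubectl --context functional-253500 get pods -l run=nginx-svc -n default
# The LoadBalancer service that the subsequent tunnel tests use
kubectl --context functional-253500 get svc nginx-svc -n default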

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-253500 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 11704: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (13.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1275: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (12.8252722s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (13.21s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (12.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1310: (dbg) Done: out/minikube-windows-amd64.exe profile list: (12.2559241s)
functional_test.go:1315: Took "12.2564297s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1329: Took "249.6077ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (12.51s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1361: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (12.738408s)
functional_test.go:1366: Took "12.7387817s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1379: Took "264.7925ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (13.00s)
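
Note: the -o json variants above exist so the output can be consumed programmatically. A hedged PowerShell sketch of doing that (it assumes the top-level "valid"/"invalid" arrays and the Name/Status fields that recent minikube releases emit; not verified against this exact build):

# Parse the profile list and print each valid profile's name and status (field names assumed)
$profiles = out/minikube-windows-amd64.exe profile list -o json | ConvertFrom-Json
$profiles.valid | ForEach-Object { "$($_.Name): $($_.Status)" }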

                                                
                                    
TestFunctional/parallel/Version/short (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 version --short
--- PASS: TestFunctional/parallel/Version/short (0.28s)

                                                
                                    
TestFunctional/parallel/Version/components (8.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 version -o=json --components: (8.4827061s)
--- PASS: TestFunctional/parallel/Version/components (8.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (7.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 image ls --format short --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 image ls --format short --alsologtostderr: (7.7329795s)
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-253500 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.1
registry.k8s.io/kube-proxy:v1.32.1
registry.k8s.io/kube-controller-manager:v1.32.1
registry.k8s.io/kube-apiserver:v1.32.1
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-253500
docker.io/kicbase/echo-server:functional-253500
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-253500 image ls --format short --alsologtostderr:
I0127 11:05:38.892698    2204 out.go:345] Setting OutFile to fd 1564 ...
I0127 11:05:39.009958    2204 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:05:39.009958    2204 out.go:358] Setting ErrFile to fd 1356...
I0127 11:05:39.009958    2204 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:05:39.082348    2204 config.go:182] Loaded profile config "functional-253500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 11:05:39.082893    2204 config.go:182] Loaded profile config "functional-253500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 11:05:39.083781    2204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-253500 ).state
I0127 11:05:41.355401    2204 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0127 11:05:41.355401    2204 main.go:141] libmachine: [stderr =====>] : 
I0127 11:05:41.367265    2204 ssh_runner.go:195] Run: systemctl --version
I0127 11:05:41.367265    2204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-253500 ).state
I0127 11:05:43.603415    2204 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0127 11:05:43.603483    2204 main.go:141] libmachine: [stderr =====>] : 
I0127 11:05:43.603483    2204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-253500 ).networkadapters[0]).ipaddresses[0]
I0127 11:05:46.295073    2204 main.go:141] libmachine: [stdout =====>] : 172.29.200.214

                                                
                                                
I0127 11:05:46.295073    2204 main.go:141] libmachine: [stderr =====>] : 
I0127 11:05:46.295699    2204 sshutil.go:53] new ssh client: &{IP:172.29.200.214 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-253500\id_rsa Username:docker}
I0127 11:05:46.397602    2204 ssh_runner.go:235] Completed: systemctl --version: (5.0302844s)
I0127 11:05:46.407572    2204 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (7.73s)
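
Note: the stderr above shows how libmachine resolves the VM's state and IP address before every SSH call by shelling out to PowerShell. The same two queries can be run by hand to inspect the VM (commands copied from the log; they require the Hyper-V PowerShell module and an elevated session):

# State of the minikube VM, as queried in the log above
( Hyper-V\Get-VM functional-253500 ).state
# First IP address of the VM's first network adapter
(( Hyper-V\Get-VM functional-253500 ).networkadapters[0]).ipaddresses[0]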

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (7.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 image ls --format table --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 image ls --format table --alsologtostderr: (7.4554066s)
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-253500 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.32.1           | 2b0d6572d062c | 69.6MB |
| registry.k8s.io/kube-controller-manager     | v1.32.1           | 019ee182b58e2 | 89.7MB |
| registry.k8s.io/kube-proxy                  | v1.32.1           | e29f9c7391fd9 | 94MB   |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/kicbase/echo-server               | functional-253500 | 9056ab77afb8e | 4.94MB |
| docker.io/library/minikube-local-cache-test | functional-253500 | 6d45600b125f9 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.32.1           | 95c0bda56fc4d | 97MB   |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | latest            | 9bea9f2796e23 | 192MB  |
| registry.k8s.io/etcd                        | 3.5.16-0          | a9e7e6b294baf | 150MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | alpine            | 93f9c72967dbc | 47MB   |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-253500 image ls --format table --alsologtostderr:
I0127 11:05:54.022713    2072 out.go:345] Setting OutFile to fd 1352 ...
I0127 11:05:54.099854    2072 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:05:54.099854    2072 out.go:358] Setting ErrFile to fd 1476...
I0127 11:05:54.099854    2072 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:05:54.117447    2072 config.go:182] Loaded profile config "functional-253500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 11:05:54.117898    2072 config.go:182] Loaded profile config "functional-253500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 11:05:54.118929    2072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-253500 ).state
I0127 11:05:56.305122    2072 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0127 11:05:56.305122    2072 main.go:141] libmachine: [stderr =====>] : 
I0127 11:05:56.320069    2072 ssh_runner.go:195] Run: systemctl --version
I0127 11:05:56.320069    2072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-253500 ).state
I0127 11:05:58.501480    2072 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0127 11:05:58.501966    2072 main.go:141] libmachine: [stderr =====>] : 
I0127 11:05:58.502216    2072 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-253500 ).networkadapters[0]).ipaddresses[0]
I0127 11:06:01.163973    2072 main.go:141] libmachine: [stdout =====>] : 172.29.200.214

                                                
                                                
I0127 11:06:01.163973    2072 main.go:141] libmachine: [stderr =====>] : 
I0127 11:06:01.163973    2072 sshutil.go:53] new ssh client: &{IP:172.29.200.214 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-253500\id_rsa Username:docker}
I0127 11:06:01.257034    2072 ssh_runner.go:235] Completed: systemctl --version: (4.9369137s)
I0127 11:06:01.265613    2072 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (7.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (7.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 image ls --format json --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 image ls --format json --alsologtostderr: (7.5495523s)
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-253500 image ls --format json --alsologtostderr:
[{"id":"95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.1"],"size":"97000000"},{"id":"019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.1"],"size":"89700000"},{"id":"e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.32.1"],"size":"94000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.1"],"size":"69600000"},{"id":"93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3
c83a409e3","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-253500"],"size":"4940000"},{"id":"6d45600b125f9aeb7a16c86150f0981d62cb1d4b7229343d771cce5ce8eaa4f3","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-253500"],"size":"30"},{"id":"9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1","repoDigests":[],"repoTags":["docker.io/library/ng
inx:latest"],"size":"192000000"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"150000000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-253500 image ls --format json --alsologtostderr:
I0127 11:05:46.472084   12856 out.go:345] Setting OutFile to fd 1512 ...
I0127 11:05:46.543077   12856 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:05:46.543077   12856 out.go:358] Setting ErrFile to fd 948...
I0127 11:05:46.543077   12856 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:05:46.560150   12856 config.go:182] Loaded profile config "functional-253500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 11:05:46.561079   12856 config.go:182] Loaded profile config "functional-253500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 11:05:46.561079   12856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-253500 ).state
I0127 11:05:48.803847   12856 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0127 11:05:48.803908   12856 main.go:141] libmachine: [stderr =====>] : 
I0127 11:05:48.817166   12856 ssh_runner.go:195] Run: systemctl --version
I0127 11:05:48.817166   12856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-253500 ).state
I0127 11:05:51.029666   12856 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0127 11:05:51.029666   12856 main.go:141] libmachine: [stderr =====>] : 
I0127 11:05:51.029666   12856 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-253500 ).networkadapters[0]).ipaddresses[0]
I0127 11:05:53.696061   12856 main.go:141] libmachine: [stdout =====>] : 172.29.200.214

                                                
                                                
I0127 11:05:53.696061   12856 main.go:141] libmachine: [stderr =====>] : 
I0127 11:05:53.696657   12856 sshutil.go:53] new ssh client: &{IP:172.29.200.214 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-253500\id_rsa Username:docker}
I0127 11:05:53.799950   12856 ssh_runner.go:235] Completed: systemctl --version: (4.9826446s)
I0127 11:05:53.810991   12856 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (7.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 image ls --format yaml --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 image ls --format yaml --alsologtostderr: (7.5754716s)
functional_test.go:266: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-253500 image ls --format yaml --alsologtostderr:
- id: 95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.1
size: "97000000"
- id: 2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.1
size: "69600000"
- id: 93f9c72967dbcfaffe724ae5ba471e9568c9bbe67271f53266c84f3c83a409e3
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.1
size: "89700000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 9bea9f2796e236cb18c2b3ad561ff29f655d1001f9ec7247a0bc5e08d25652a1
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "150000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-253500
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 6d45600b125f9aeb7a16c86150f0981d62cb1d4b7229343d771cce5ce8eaa4f3
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-253500
size: "30"
- id: e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.32.1
size: "94000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-253500 image ls --format yaml --alsologtostderr:
I0127 11:05:38.893699    2640 out.go:345] Setting OutFile to fd 1148 ...
I0127 11:05:38.971685    2640 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:05:38.971685    2640 out.go:358] Setting ErrFile to fd 1360...
I0127 11:05:38.971685    2640 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:05:38.989676    2640 config.go:182] Loaded profile config "functional-253500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 11:05:38.989676    2640 config.go:182] Loaded profile config "functional-253500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 11:05:38.990684    2640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-253500 ).state
I0127 11:05:41.279164    2640 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0127 11:05:41.279223    2640 main.go:141] libmachine: [stderr =====>] : 
I0127 11:05:41.298668    2640 ssh_runner.go:195] Run: systemctl --version
I0127 11:05:41.298668    2640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-253500 ).state
I0127 11:05:43.487449    2640 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0127 11:05:43.487798    2640 main.go:141] libmachine: [stderr =====>] : 
I0127 11:05:43.487876    2640 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-253500 ).networkadapters[0]).ipaddresses[0]
I0127 11:05:46.151833    2640 main.go:141] libmachine: [stdout =====>] : 172.29.200.214

                                                
                                                
I0127 11:05:46.151833    2640 main.go:141] libmachine: [stderr =====>] : 
I0127 11:05:46.153287    2640 sshutil.go:53] new ssh client: &{IP:172.29.200.214 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-253500\id_rsa Username:docker}
I0127 11:05:46.255471    2640 ssh_runner.go:235] Completed: systemctl --version: (4.956654s)
I0127 11:05:46.265628    2640 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (7.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (27.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 ssh pgrep buildkitd
E0127 11:05:47.425442    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:308: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-253500 ssh pgrep buildkitd: exit status 1 (9.6596242s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 image build -t localhost/my-image:functional-253500 testdata\build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 image build -t localhost/my-image:functional-253500 testdata\build --alsologtostderr: (10.911253s)
functional_test.go:323: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-253500 image build -t localhost/my-image:functional-253500 testdata\build --alsologtostderr:
I0127 11:05:56.283070    9524 out.go:345] Setting OutFile to fd 1044 ...
I0127 11:05:56.396375    9524 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:05:56.396375    9524 out.go:358] Setting ErrFile to fd 1444...
I0127 11:05:56.396375    9524 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0127 11:05:56.413434    9524 config.go:182] Loaded profile config "functional-253500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 11:05:56.436083    9524 config.go:182] Loaded profile config "functional-253500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
I0127 11:05:56.438041    9524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-253500 ).state
I0127 11:05:58.672785    9524 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0127 11:05:58.672785    9524 main.go:141] libmachine: [stderr =====>] : 
I0127 11:05:58.687474    9524 ssh_runner.go:195] Run: systemctl --version
I0127 11:05:58.687474    9524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-253500 ).state
I0127 11:06:00.939229    9524 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0127 11:06:00.939229    9524 main.go:141] libmachine: [stderr =====>] : 
I0127 11:06:00.939229    9524 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-253500 ).networkadapters[0]).ipaddresses[0]
I0127 11:06:03.302342    9524 main.go:141] libmachine: [stdout =====>] : 172.29.200.214

                                                
                                                
I0127 11:06:03.302735    9524 main.go:141] libmachine: [stderr =====>] : 
I0127 11:06:03.303358    9524 sshutil.go:53] new ssh client: &{IP:172.29.200.214 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\functional-253500\id_rsa Username:docker}
I0127 11:06:03.401342    9524 ssh_runner.go:235] Completed: systemctl --version: (4.713819s)
I0127 11:06:03.401459    9524 build_images.go:161] Building image from path: C:\Users\jenkins.minikube6\AppData\Local\Temp\build.2263649592.tar
I0127 11:06:03.412471    9524 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0127 11:06:03.441472    9524 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2263649592.tar
I0127 11:06:03.449698    9524 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2263649592.tar: stat -c "%s %y" /var/lib/minikube/build/build.2263649592.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2263649592.tar': No such file or directory
I0127 11:06:03.449698    9524 ssh_runner.go:362] scp C:\Users\jenkins.minikube6\AppData\Local\Temp\build.2263649592.tar --> /var/lib/minikube/build/build.2263649592.tar (3072 bytes)
I0127 11:06:03.507710    9524 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2263649592
I0127 11:06:03.541334    9524 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2263649592 -xf /var/lib/minikube/build/build.2263649592.tar
I0127 11:06:03.567420    9524 docker.go:360] Building image: /var/lib/minikube/build/build.2263649592
I0127 11:06:03.580121    9524 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-253500 /var/lib/minikube/build/build.2263649592
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.9s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

                                                
                                                
#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#4 ...

                                                
                                                
#5 [internal] load build context
#5 transferring context: 62B done
#5 DONE 0.1s

                                                
                                                
#4 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#4 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.3s
#4 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#4 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#4 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#4 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.2s done
#4 DONE 0.8s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.4s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.2s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:d5889308d1589bb91a2878ca630298179cdb4dbe32a70f96bce87283c211126a
#8 writing image sha256:d5889308d1589bb91a2878ca630298179cdb4dbe32a70f96bce87283c211126a 0.0s done
#8 naming to localhost/my-image:functional-253500 0.0s done
#8 DONE 0.2s
I0127 11:06:06.967012    9524 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-253500 /var/lib/minikube/build/build.2263649592: (3.3867558s)
I0127 11:06:06.979499    9524 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2263649592
I0127 11:06:07.011182    9524 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2263649592.tar
I0127 11:06:07.031302    9524 build_images.go:217] Built localhost/my-image:functional-253500 from C:\Users\jenkins.minikube6\AppData\Local\Temp\build.2263649592.tar
I0127 11:06:07.031443    9524 build_images.go:133] succeeded building to: functional-253500
I0127 11:06:07.031521    9524 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 image ls: (7.00023s)
E0127 11:07:10.499280    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (27.57s)
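
Note: the BuildKit trace above implies a three-step Dockerfile under testdata\build. A reconstruction consistent with steps #2, #4, #6 and #7 (the file's exact contents are not in this report, so this is an inferred sketch, not the repository's file):

# Inferred from the build trace above
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /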

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.1525508s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-253500
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (17.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 image load --daemon kicbase/echo-server:functional-253500 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 image load --daemon kicbase/echo-server:functional-253500 --alsologtostderr: (8.9598744s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 image ls: (8.1868594s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (17.15s)

                                                
                                    
TestFunctional/parallel/DockerEnv/powershell (43.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:499: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-253500 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-253500"
functional_test.go:499: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-253500 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-253500": (28.7972271s)
functional_test.go:522: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-253500 docker-env | Invoke-Expression ; docker images"
functional_test.go:522: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-253500 docker-env | Invoke-Expression ; docker images": (14.4561443s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (43.27s)
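
Note: the command under test points a local docker client at the Docker daemon inside the functional-253500 VM. Outside the harness the same thing is done with the invocation shown in the log (run from the checkout root so the out/ binary resolves):

# Point the current PowerShell session's docker CLI at the VM's daemon, then list its images
out/minikube-windows-amd64.exe -p functional-253500 docker-env | Invoke-Expression
docker images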

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (16.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 image load --daemon kicbase/echo-server:functional-253500 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 image load --daemon kicbase/echo-server:functional-253500 --alsologtostderr: (8.6578536s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 image ls: (7.9446139s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (16.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (17.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-253500
functional_test.go:245: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 image load --daemon kicbase/echo-server:functional-253500 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 image load --daemon kicbase/echo-server:functional-253500 --alsologtostderr: (8.5165428s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 image ls: (7.6766123s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (17.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 image save kicbase/echo-server:functional-253500 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 image save kicbase/echo-server:functional-253500 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: (8.1232219s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (2.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 update-context --alsologtostderr -v=2: (2.5111188s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.51s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 update-context --alsologtostderr -v=2: (2.4985588s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.50s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (2.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 update-context --alsologtostderr -v=2: (2.4985992s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (15.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 image rm kicbase/echo-server:functional-253500 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 image rm kicbase/echo-server:functional-253500 --alsologtostderr: (8.0423953s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 image ls: (7.4788496s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (15.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (15.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: (8.0372749s)
functional_test.go:451: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 image ls
functional_test.go:451: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 image ls: (7.093648s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (15.13s)
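
Note: together with ImageSaveToFile above, this test exercises a tar round trip between the host and the VM's container runtime. The equivalent manual commands, taken from the log (any writable host path would do in place of the workspace path used here):

# Export the tagged image from the VM to a tar on the host
out/minikube-windows-amd64.exe -p functional-253500 image save kicbase/echo-server:functional-253500 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
# Load it back into the VM and confirm it is listed
out/minikube-windows-amd64.exe -p functional-253500 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
out/minikube-windows-amd64.exe -p functional-253500 image ls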

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (7.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-253500
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-253500 image save --daemon kicbase/echo-server:functional-253500 --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-windows-amd64.exe -p functional-253500 image save --daemon kicbase/echo-server:functional-253500 --alsologtostderr: (7.306284s)
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-253500
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (7.48s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.2s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-253500
--- PASS: TestFunctional/delete_echo-server_images (0.20s)

                                                
                                    
TestFunctional/delete_my-image_image (0.09s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-253500
--- PASS: TestFunctional/delete_my-image_image (0.09s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.09s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-253500
--- PASS: TestFunctional/delete_minikube_cached_images (0.09s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (696.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-011400 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0127 11:10:47.429647    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 11:12:03.988646    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 11:12:03.995834    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 11:12:04.008067    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 11:12:04.029934    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 11:12:04.071828    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 11:12:04.153770    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 11:12:04.315776    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 11:12:04.638381    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 11:12:05.280919    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 11:12:06.563533    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 11:12:09.125811    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 11:12:14.248469    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 11:12:24.490583    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 11:12:44.973324    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 11:13:25.935795    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 11:14:47.859782    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 11:15:47.432540    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 11:17:03.991448    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 11:17:31.704392    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-011400 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (11m0.4109446s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 status -v=7 --alsologtostderr: (36.0708673s)
--- PASS: TestMultiControlPlane/serial/StartCluster (696.48s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (13.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-011400 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-011400 -- rollout status deployment/busybox
E0127 11:20:47.434807    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-011400 -- rollout status deployment/busybox: (5.5248041s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-011400 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-011400 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-011400 -- exec busybox-58667487b6-68jl6 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-011400 -- exec busybox-58667487b6-68jl6 -- nslookup kubernetes.io: (1.8211486s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-011400 -- exec busybox-58667487b6-fzbr5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-011400 -- exec busybox-58667487b6-qwccg -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-011400 -- exec busybox-58667487b6-68jl6 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-011400 -- exec busybox-58667487b6-fzbr5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-011400 -- exec busybox-58667487b6-qwccg -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-011400 -- exec busybox-58667487b6-68jl6 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-011400 -- exec busybox-58667487b6-fzbr5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-011400 -- exec busybox-58667487b6-qwccg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (13.33s)
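The DeployApp log above applies testdata/ha/ha-pod-dns-test.yaml, waits for deployment/busybox to roll out, and then runs nslookup for kubernetes.io, kubernetes.default, and kubernetes.default.svc.cluster.local from each busybox replica. The following is a minimal Go sketch (not part of the test suite) of that same verification loop, shelling out to a plain kubectl instead of "minikube kubectl -p"; the "ha-011400" context name and the use of all pods in the default namespace mirror the commands in the log and are assumptions outside of it.

// dnscheck.go: re-run the DNS checks shown in the DeployApp log.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same listing the test performs: pod names from the default namespace.
	out, err := exec.Command("kubectl", "--context", "ha-011400",
		"get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		log.Fatalf("listing pods: %v", err)
	}
	hosts := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		for _, host := range hosts {
			// DNS must resolve from every replica, which exercises CoreDNS
			// reachability across the HA control-plane nodes.
			if err := exec.Command("kubectl", "--context", "ha-011400",
				"exec", pod, "--", "nslookup", host).Run(); err != nil {
				log.Fatalf("nslookup %s from %s failed: %v", host, pod, err)
			}
			fmt.Printf("%s resolved %s\n", pod, host)
		}
	}
}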

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (257.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-011400 -v=7 --alsologtostderr
E0127 11:23:50.512364    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-011400 -v=7 --alsologtostderr: (3m29.491571s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 status -v=7 --alsologtostderr
E0127 11:25:47.437940    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 status -v=7 --alsologtostderr: (47.678657s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (257.17s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-011400 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.19s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (47.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E0127 11:27:03.997613    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (47.7846068s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (47.79s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (628.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 status --output json -v=7 --alsologtostderr: (47.8224231s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 cp testdata\cp-test.txt ha-011400:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 cp testdata\cp-test.txt ha-011400:/home/docker/cp-test.txt: (9.6142323s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400 "sudo cat /home/docker/cp-test.txt": (9.4752683s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1959896975\001\cp-test_ha-011400.txt
E0127 11:28:27.074391    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1959896975\001\cp-test_ha-011400.txt: (9.4774063s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400 "sudo cat /home/docker/cp-test.txt": (9.5685051s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400:/home/docker/cp-test.txt ha-011400-m02:/home/docker/cp-test_ha-011400_ha-011400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400:/home/docker/cp-test.txt ha-011400-m02:/home/docker/cp-test_ha-011400_ha-011400-m02.txt: (16.4935436s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400 "sudo cat /home/docker/cp-test.txt": (9.6065701s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m02 "sudo cat /home/docker/cp-test_ha-011400_ha-011400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m02 "sudo cat /home/docker/cp-test_ha-011400_ha-011400-m02.txt": (9.4598562s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400:/home/docker/cp-test.txt ha-011400-m03:/home/docker/cp-test_ha-011400_ha-011400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400:/home/docker/cp-test.txt ha-011400-m03:/home/docker/cp-test_ha-011400_ha-011400-m03.txt: (16.6347205s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400 "sudo cat /home/docker/cp-test.txt": (9.6012604s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m03 "sudo cat /home/docker/cp-test_ha-011400_ha-011400-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m03 "sudo cat /home/docker/cp-test_ha-011400_ha-011400-m03.txt": (9.4244647s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400:/home/docker/cp-test.txt ha-011400-m04:/home/docker/cp-test_ha-011400_ha-011400-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400:/home/docker/cp-test.txt ha-011400-m04:/home/docker/cp-test_ha-011400_ha-011400-m04.txt: (16.5212702s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400 "sudo cat /home/docker/cp-test.txt": (9.6553494s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m04 "sudo cat /home/docker/cp-test_ha-011400_ha-011400-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m04 "sudo cat /home/docker/cp-test_ha-011400_ha-011400-m04.txt": (9.7543523s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 cp testdata\cp-test.txt ha-011400-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 cp testdata\cp-test.txt ha-011400-m02:/home/docker/cp-test.txt: (9.9376448s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m02 "sudo cat /home/docker/cp-test.txt": (9.5268081s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1959896975\001\cp-test_ha-011400-m02.txt
E0127 11:30:47.440980    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1959896975\001\cp-test_ha-011400-m02.txt: (9.4636711s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m02 "sudo cat /home/docker/cp-test.txt": (9.4444443s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400-m02:/home/docker/cp-test.txt ha-011400:/home/docker/cp-test_ha-011400-m02_ha-011400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400-m02:/home/docker/cp-test.txt ha-011400:/home/docker/cp-test_ha-011400-m02_ha-011400.txt: (16.4812685s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m02 "sudo cat /home/docker/cp-test.txt": (9.5171229s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400 "sudo cat /home/docker/cp-test_ha-011400-m02_ha-011400.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400 "sudo cat /home/docker/cp-test_ha-011400-m02_ha-011400.txt": (9.5358491s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400-m02:/home/docker/cp-test.txt ha-011400-m03:/home/docker/cp-test_ha-011400-m02_ha-011400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400-m02:/home/docker/cp-test.txt ha-011400-m03:/home/docker/cp-test_ha-011400-m02_ha-011400-m03.txt: (16.5499993s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m02 "sudo cat /home/docker/cp-test.txt": (9.556002s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m03 "sudo cat /home/docker/cp-test_ha-011400-m02_ha-011400-m03.txt"
E0127 11:32:04.000990    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m03 "sudo cat /home/docker/cp-test_ha-011400-m02_ha-011400-m03.txt": (9.5300665s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400-m02:/home/docker/cp-test.txt ha-011400-m04:/home/docker/cp-test_ha-011400-m02_ha-011400-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400-m02:/home/docker/cp-test.txt ha-011400-m04:/home/docker/cp-test_ha-011400-m02_ha-011400-m04.txt: (16.6163016s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m02 "sudo cat /home/docker/cp-test.txt": (9.6060515s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m04 "sudo cat /home/docker/cp-test_ha-011400-m02_ha-011400-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m04 "sudo cat /home/docker/cp-test_ha-011400-m02_ha-011400-m04.txt": (9.4726762s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 cp testdata\cp-test.txt ha-011400-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 cp testdata\cp-test.txt ha-011400-m03:/home/docker/cp-test.txt: (9.4535649s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m03 "sudo cat /home/docker/cp-test.txt": (9.5053809s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1959896975\001\cp-test_ha-011400-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1959896975\001\cp-test_ha-011400-m03.txt: (9.5088486s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m03 "sudo cat /home/docker/cp-test.txt": (9.4600783s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400-m03:/home/docker/cp-test.txt ha-011400:/home/docker/cp-test_ha-011400-m03_ha-011400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400-m03:/home/docker/cp-test.txt ha-011400:/home/docker/cp-test_ha-011400-m03_ha-011400.txt: (16.5148411s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m03 "sudo cat /home/docker/cp-test.txt": (9.5861131s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400 "sudo cat /home/docker/cp-test_ha-011400-m03_ha-011400.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400 "sudo cat /home/docker/cp-test_ha-011400-m03_ha-011400.txt": (9.7499407s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400-m03:/home/docker/cp-test.txt ha-011400-m02:/home/docker/cp-test_ha-011400-m03_ha-011400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400-m03:/home/docker/cp-test.txt ha-011400-m02:/home/docker/cp-test_ha-011400-m03_ha-011400-m02.txt: (16.6053099s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m03 "sudo cat /home/docker/cp-test.txt": (9.5474612s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m02 "sudo cat /home/docker/cp-test_ha-011400-m03_ha-011400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m02 "sudo cat /home/docker/cp-test_ha-011400-m03_ha-011400-m02.txt": (9.4632089s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400-m03:/home/docker/cp-test.txt ha-011400-m04:/home/docker/cp-test_ha-011400-m03_ha-011400-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400-m03:/home/docker/cp-test.txt ha-011400-m04:/home/docker/cp-test_ha-011400-m03_ha-011400-m04.txt: (16.5548917s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m03 "sudo cat /home/docker/cp-test.txt": (9.5678278s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m04 "sudo cat /home/docker/cp-test_ha-011400-m03_ha-011400-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m04 "sudo cat /home/docker/cp-test_ha-011400-m03_ha-011400-m04.txt": (9.5513779s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 cp testdata\cp-test.txt ha-011400-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 cp testdata\cp-test.txt ha-011400-m04:/home/docker/cp-test.txt: (9.4872988s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m04 "sudo cat /home/docker/cp-test.txt": (9.4822053s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1959896975\001\cp-test_ha-011400-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile1959896975\001\cp-test_ha-011400-m04.txt: (9.3501302s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m04 "sudo cat /home/docker/cp-test.txt"
E0127 11:35:47.443993    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m04 "sudo cat /home/docker/cp-test.txt": (9.5859406s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400-m04:/home/docker/cp-test.txt ha-011400:/home/docker/cp-test_ha-011400-m04_ha-011400.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400-m04:/home/docker/cp-test.txt ha-011400:/home/docker/cp-test_ha-011400-m04_ha-011400.txt: (16.7755227s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m04 "sudo cat /home/docker/cp-test.txt": (9.8014195s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400 "sudo cat /home/docker/cp-test_ha-011400-m04_ha-011400.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400 "sudo cat /home/docker/cp-test_ha-011400-m04_ha-011400.txt": (9.471944s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400-m04:/home/docker/cp-test.txt ha-011400-m02:/home/docker/cp-test_ha-011400-m04_ha-011400-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400-m04:/home/docker/cp-test.txt ha-011400-m02:/home/docker/cp-test_ha-011400-m04_ha-011400-m02.txt: (16.3888647s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m04 "sudo cat /home/docker/cp-test.txt": (9.4268132s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m02 "sudo cat /home/docker/cp-test_ha-011400-m04_ha-011400-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m02 "sudo cat /home/docker/cp-test_ha-011400-m04_ha-011400-m02.txt": (9.5178495s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400-m04:/home/docker/cp-test.txt ha-011400-m03:/home/docker/cp-test_ha-011400-m04_ha-011400-m03.txt
E0127 11:37:04.004247    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 cp ha-011400-m04:/home/docker/cp-test.txt ha-011400-m03:/home/docker/cp-test_ha-011400-m04_ha-011400-m03.txt: (16.4682895s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m04 "sudo cat /home/docker/cp-test.txt": (9.495066s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m03 "sudo cat /home/docker/cp-test_ha-011400-m04_ha-011400-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 ssh -n ha-011400-m03 "sudo cat /home/docker/cp-test_ha-011400-m04_ha-011400-m03.txt": (9.510025s)
--- PASS: TestMultiControlPlane/serial/CopyFile (628.22s)
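Every step in the CopyFile log is the same round trip: "minikube cp" pushes a file onto a node, then "minikube ssh -n <node>" reads it back so the helper can compare contents. Below is a minimal Go sketch of one such round trip under stated assumptions: "minikube" stands in for the out/minikube-windows-amd64.exe binary used in the log, and the local source path is a placeholder.

// cpcheck.go: one cp/ssh round trip as performed repeatedly above.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	const (
		profile = "ha-011400"
		node    = "ha-011400-m02"
		src     = "testdata/cp-test.txt" // placeholder local file
		dst     = "/home/docker/cp-test.txt"
	)
	want, err := os.ReadFile(src)
	if err != nil {
		log.Fatal(err)
	}
	// minikube -p <profile> cp <local> <node>:<remote>
	if err := exec.Command("minikube", "-p", profile, "cp", src, node+":"+dst).Run(); err != nil {
		log.Fatalf("cp: %v", err)
	}
	// minikube -p <profile> ssh -n <node> "sudo cat <remote>"
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo cat "+dst).Output()
	if err != nil {
		log.Fatalf("ssh cat: %v", err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatalf("content mismatch on %s", node)
	}
	log.Printf("round-trip to %s ok", node)
}

The roughly 9-16 second duration of each step in the log is dominated by the Hyper-V PowerShell queries and SSH session setup, not by the file transfer itself.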

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (72.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p ha-011400 node stop m02 -v=7 --alsologtostderr: (34.5306894s)
ha_test.go:371: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-011400 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-011400 status -v=7 --alsologtostderr: exit status 7 (38.0751383s)

                                                
                                                
-- stdout --
	ha-011400
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-011400-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-011400-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-011400-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0127 11:38:13.669774   11204 out.go:345] Setting OutFile to fd 1124 ...
	I0127 11:38:13.768126   11204 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:38:13.768126   11204 out.go:358] Setting ErrFile to fd 800...
	I0127 11:38:13.768126   11204 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:38:13.794567   11204 out.go:352] Setting JSON to false
	I0127 11:38:13.795142   11204 mustload.go:65] Loading cluster: ha-011400
	I0127 11:38:13.795533   11204 notify.go:220] Checking for updates...
	I0127 11:38:13.796144   11204 config.go:182] Loaded profile config "ha-011400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 11:38:13.796144   11204 status.go:174] checking status of ha-011400 ...
	I0127 11:38:13.813188   11204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:38:16.096457   11204 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:38:16.096628   11204 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:38:16.096628   11204 status.go:371] ha-011400 host status = "Running" (err=<nil>)
	I0127 11:38:16.096628   11204 host.go:66] Checking if "ha-011400" exists ...
	I0127 11:38:16.097371   11204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:38:18.323723   11204 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:38:18.323723   11204 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:38:18.323924   11204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:38:20.996351   11204 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:38:20.996351   11204 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:38:20.996443   11204 host.go:66] Checking if "ha-011400" exists ...
	I0127 11:38:21.009346   11204 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:38:21.009346   11204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400 ).state
	I0127 11:38:23.138286   11204 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:38:23.138286   11204 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:38:23.138286   11204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400 ).networkadapters[0]).ipaddresses[0]
	I0127 11:38:25.850870   11204 main.go:141] libmachine: [stdout =====>] : 172.29.192.249
	
	I0127 11:38:25.850936   11204 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:38:25.850936   11204 sshutil.go:53] new ssh client: &{IP:172.29.192.249 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400\id_rsa Username:docker}
	I0127 11:38:25.948945   11204 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9394565s)
	I0127 11:38:25.960257   11204 ssh_runner.go:195] Run: systemctl --version
	I0127 11:38:25.980705   11204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:38:26.012323   11204 kubeconfig.go:125] found "ha-011400" server: "https://172.29.207.254:8443"
	I0127 11:38:26.012504   11204 api_server.go:166] Checking apiserver status ...
	I0127 11:38:26.024999   11204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:38:26.069633   11204 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2137/cgroup
	W0127 11:38:26.091393   11204 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2137/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 11:38:26.101984   11204 ssh_runner.go:195] Run: ls
	I0127 11:38:26.109677   11204 api_server.go:253] Checking apiserver healthz at https://172.29.207.254:8443/healthz ...
	I0127 11:38:26.120019   11204 api_server.go:279] https://172.29.207.254:8443/healthz returned 200:
	ok
	I0127 11:38:26.120019   11204 status.go:463] ha-011400 apiserver status = Running (err=<nil>)
	I0127 11:38:26.120019   11204 status.go:176] ha-011400 status: &{Name:ha-011400 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:38:26.120019   11204 status.go:174] checking status of ha-011400-m02 ...
	I0127 11:38:26.121609   11204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m02 ).state
	I0127 11:38:28.224813   11204 main.go:141] libmachine: [stdout =====>] : Off
	
	I0127 11:38:28.225207   11204 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:38:28.225207   11204 status.go:371] ha-011400-m02 host status = "Stopped" (err=<nil>)
	I0127 11:38:28.225207   11204 status.go:384] host is not running, skipping remaining checks
	I0127 11:38:28.225207   11204 status.go:176] ha-011400-m02 status: &{Name:ha-011400-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:38:28.225207   11204 status.go:174] checking status of ha-011400-m03 ...
	I0127 11:38:28.226362   11204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:38:30.377270   11204 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:38:30.377270   11204 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:38:30.377270   11204 status.go:371] ha-011400-m03 host status = "Running" (err=<nil>)
	I0127 11:38:30.377270   11204 host.go:66] Checking if "ha-011400-m03" exists ...
	I0127 11:38:30.379010   11204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:38:32.531200   11204 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:38:32.531285   11204 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:38:32.531285   11204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:38:35.035419   11204 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:38:35.035419   11204 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:38:35.035419   11204 host.go:66] Checking if "ha-011400-m03" exists ...
	I0127 11:38:35.047730   11204 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:38:35.047730   11204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m03 ).state
	I0127 11:38:37.128327   11204 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:38:37.128432   11204 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:38:37.128501   11204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m03 ).networkadapters[0]).ipaddresses[0]
	I0127 11:38:39.692701   11204 main.go:141] libmachine: [stdout =====>] : 172.29.196.110
	
	I0127 11:38:39.692701   11204 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:38:39.693599   11204 sshutil.go:53] new ssh client: &{IP:172.29.196.110 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m03\id_rsa Username:docker}
	I0127 11:38:39.802110   11204 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7543299s)
	I0127 11:38:39.813668   11204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:38:39.838751   11204 kubeconfig.go:125] found "ha-011400" server: "https://172.29.207.254:8443"
	I0127 11:38:39.838881   11204 api_server.go:166] Checking apiserver status ...
	I0127 11:38:39.849654   11204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 11:38:39.895002   11204 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2325/cgroup
	W0127 11:38:39.913050   11204 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2325/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 11:38:39.922956   11204 ssh_runner.go:195] Run: ls
	I0127 11:38:39.929770   11204 api_server.go:253] Checking apiserver healthz at https://172.29.207.254:8443/healthz ...
	I0127 11:38:39.941580   11204 api_server.go:279] https://172.29.207.254:8443/healthz returned 200:
	ok
	I0127 11:38:39.941580   11204 status.go:463] ha-011400-m03 apiserver status = Running (err=<nil>)
	I0127 11:38:39.941580   11204 status.go:176] ha-011400-m03 status: &{Name:ha-011400-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 11:38:39.941580   11204 status.go:174] checking status of ha-011400-m04 ...
	I0127 11:38:39.942572   11204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m04 ).state
	I0127 11:38:42.055161   11204 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:38:42.055161   11204 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:38:42.055161   11204 status.go:371] ha-011400-m04 host status = "Running" (err=<nil>)
	I0127 11:38:42.055161   11204 host.go:66] Checking if "ha-011400-m04" exists ...
	I0127 11:38:42.056303   11204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m04 ).state
	I0127 11:38:44.183218   11204 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:38:44.184425   11204 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:38:44.184537   11204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m04 ).networkadapters[0]).ipaddresses[0]
	I0127 11:38:46.778228   11204 main.go:141] libmachine: [stdout =====>] : 172.29.200.81
	
	I0127 11:38:46.778228   11204 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:38:46.778228   11204 host.go:66] Checking if "ha-011400-m04" exists ...
	I0127 11:38:46.789792   11204 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 11:38:46.789792   11204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-011400-m04 ).state
	I0127 11:38:48.901608   11204 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 11:38:48.901660   11204 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:38:48.901660   11204 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-011400-m04 ).networkadapters[0]).ipaddresses[0]
	I0127 11:38:51.440767   11204 main.go:141] libmachine: [stdout =====>] : 172.29.200.81
	
	I0127 11:38:51.440767   11204 main.go:141] libmachine: [stderr =====>] : 
	I0127 11:38:51.441300   11204 sshutil.go:53] new ssh client: &{IP:172.29.200.81 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\ha-011400-m04\id_rsa Username:docker}
	I0127 11:38:51.544363   11204 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7545205s)
	I0127 11:38:51.555744   11204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 11:38:51.583892   11204 status.go:176] ha-011400-m04 status: &{Name:ha-011400-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (72.61s)
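The stderr trace above shows how "minikube status" inspects each node on the hyperv driver: a PowerShell call for the VM state, another for the first NIC's IP address, then SSH-based kubelet and apiserver checks; for the stopped m02 VM the remaining checks are skipped. A minimal Go sketch of just the Hyper-V portion follows, shelling out to PowerShell with the exact expressions visible in the trace. It is Windows-only, requires the Hyper-V module, and the VM name is a placeholder.

// vmstate.go: query a Hyper-V VM's state and first IP, as the status trace does.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func psQuery(expr string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const vm = "ha-011400-m02"
	state, err := psQuery(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
	if err != nil {
		log.Fatalf("Get-VM state: %v", err)
	}
	fmt.Printf("%s state: %s\n", vm, state)
	if state != "Running" {
		// Matches the trace: for a stopped VM the status code skips the
		// SSH, kubelet, and apiserver checks entirely.
		return
	}
	ip, err := psQuery(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
	if err != nil {
		log.Fatalf("Get-VM ipaddresses: %v", err)
	}
	fmt.Printf("%s ip: %s\n", vm, ip)
}

Each PowerShell invocation in the trace takes roughly two seconds, which is why a four-node status call runs for over half a minute.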

                                                
                                    
TestImageBuild/serial/Setup (191.97s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-909200 --driver=hyperv
E0127 11:45:07.087169    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-909200 --driver=hyperv: (3m11.971736s)
--- PASS: TestImageBuild/serial/Setup (191.97s)

                                                
                                    
TestImageBuild/serial/NormalBuild (10.31s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-909200
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-909200: (10.3081872s)
--- PASS: TestImageBuild/serial/NormalBuild (10.31s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (8.75s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-909200
E0127 11:45:47.449961    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-909200: (8.745791s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (8.75s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (8.07s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-909200
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-909200: (8.0711875s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (8.07s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (8.07s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-909200
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-909200: (8.0692774s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (8.07s)

                                                
                                    
TestJSONOutput/start/Command (195.81s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-402400 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0127 11:47:04.009943    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-402400 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m15.8050166s)
--- PASS: TestJSONOutput/start/Command (195.81s)

                                                
                                    
TestJSONOutput/start/Audit (0.03s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.03s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (8.44s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-402400 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-402400 --output=json --user=testUser: (8.4361602s)
--- PASS: TestJSONOutput/pause/Command (8.44s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (8.19s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-402400 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-402400 --output=json --user=testUser: (8.1929015s)
--- PASS: TestJSONOutput/unpause/Command (8.19s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (38.71s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-402400 --output=json --user=testUser
E0127 11:50:47.454203    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-402400 --output=json --user=testUser: (38.7103034s)
--- PASS: TestJSONOutput/stop/Command (38.71s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (1.7s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-862000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-862000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (270.2893ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0d3e34f5-b614-433b-8325-458db9bf6b56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-862000] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"36ce7cdf-233f-40cc-b99c-73b0a210baa6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube6\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"3ab3529b-7432-4e4e-928e-21c277ddb3e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5e4d74bc-1446-491a-afcf-6ed79fd0dbe7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"0c5bdac0-4f46-45e8-bc1a-264e5a383e91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20318"}}
	{"specversion":"1.0","id":"620b163c-da64-426d-9765-2d9c87353091","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b6c7160b-7999-4a54-b778-f279d3d99d90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-862000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-862000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p json-output-error-862000: (1.427531s)
--- PASS: TestErrorJSONOutput (1.70s)
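With "--output=json", minikube emits one CloudEvents-style JSON object per line, as shown in the stdout above: each event carries specversion, id, source, type, datacontenttype, and a data payload, and the error event includes name, exitcode, and message. The sketch below decodes such a stream and surfaces any "io.k8s.sigs.minikube.error" event; field names are taken from the sample output, while reading the stream from stdin is an assumption made for illustration.

// jsonevents.go: pick the error event out of minikube's --output=json stream.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip lines that are not JSON events
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// The sample above reports DRV_UNSUPPORTED_OS with exit code 56.
			fmt.Printf("error %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}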

                                                
                                    
TestMainNoArgs (0.24s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.24s)

                                                
                                    
TestMinikubeProfile (520.45s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-454100 --driver=hyperv
E0127 11:52:04.013172    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-454100 --driver=hyperv: (3m9.6737387s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-454100 --driver=hyperv
E0127 11:55:47.457572    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 11:57:04.015997    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 11:57:10.539670    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-454100 --driver=hyperv: (3m12.4379741s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-454100
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (23.5022824s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-454100
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (23.5691164s)
helpers_test.go:175: Cleaning up "second-454100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-454100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-454100: (45.6384652s)
helpers_test.go:175: Cleaning up "first-454100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-454100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-454100: (44.9541194s)
--- PASS: TestMinikubeProfile (520.45s)

TestMountStart/serial/StartWithMountFirst (150.87s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-129500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0127 12:00:47.459161    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 12:01:47.100573    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 12:02:04.020062    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-129500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m29.8736385s)
--- PASS: TestMountStart/serial/StartWithMountFirst (150.87s)

TestMountStart/serial/VerifyMountFirst (9.47s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-129500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-129500 ssh -- ls /minikube-host: (9.4708977s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.47s)

TestMountStart/serial/StartWithMountSecond (150.1s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-129500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-129500 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m29.0998618s)
--- PASS: TestMountStart/serial/StartWithMountSecond (150.10s)

TestMountStart/serial/VerifyMountSecond (9.72s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-129500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-129500 ssh -- ls /minikube-host: (9.723709s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.72s)

TestMountStart/serial/DeleteFirst (32.2s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-129500 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-129500 --alsologtostderr -v=5: (32.1969041s)
--- PASS: TestMountStart/serial/DeleteFirst (32.20s)

TestMountStart/serial/VerifyMountPostDelete (9.1s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-129500 ssh -- ls /minikube-host
E0127 12:05:47.463519    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-129500 ssh -- ls /minikube-host: (9.1006542s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.10s)

TestMountStart/serial/Stop (30.09s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-129500
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-129500: (30.0850462s)
--- PASS: TestMountStart/serial/Stop (30.09s)

TestMountStart/serial/RestartStopped (114.79s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-129500
E0127 12:07:04.022319    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-129500: (1m53.7839032s)
--- PASS: TestMountStart/serial/RestartStopped (114.79s)

TestMountStart/serial/VerifyMountPostStop (9.18s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-129500 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-129500 ssh -- ls /minikube-host: (9.1828596s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.18s)

TestMultiNode/serial/FreshStart2Nodes (420.76s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-659000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0127 12:10:47.467017    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 12:12:04.026052    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 12:13:50.552980    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-659000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m37.6769215s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 status --alsologtostderr
E0127 12:15:47.469439    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 status --alsologtostderr: (23.0790899s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (420.76s)

TestMultiNode/serial/DeployApp2Nodes (9.39s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-659000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-659000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-659000 -- rollout status deployment/busybox: (3.4761531s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-659000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-659000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-659000 -- exec busybox-58667487b6-2jq9j -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-659000 -- exec busybox-58667487b6-2jq9j -- nslookup kubernetes.io: (1.8443624s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-659000 -- exec busybox-58667487b6-ktfxc -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-659000 -- exec busybox-58667487b6-2jq9j -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-659000 -- exec busybox-58667487b6-ktfxc -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-659000 -- exec busybox-58667487b6-2jq9j -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-659000 -- exec busybox-58667487b6-ktfxc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.39s)

TestMultiNode/serial/AddNode (237.8s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-659000 -v 3 --alsologtostderr
E0127 12:18:27.112949    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-659000 -v 3 --alsologtostderr: (3m23.0551604s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 status --alsologtostderr
E0127 12:20:47.472032    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 status --alsologtostderr: (34.7388924s)
--- PASS: TestMultiNode/serial/AddNode (237.80s)

TestMultiNode/serial/MultiNodeLabels (0.19s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-659000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.19s)

TestMultiNode/serial/ProfileList (34.81s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (34.8105015s)
--- PASS: TestMultiNode/serial/ProfileList (34.81s)

TestMultiNode/serial/CopyFile (352.27s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 status --output json --alsologtostderr
E0127 12:22:04.032858    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 status --output json --alsologtostderr: (35.5412171s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 cp testdata\cp-test.txt multinode-659000:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 cp testdata\cp-test.txt multinode-659000:/home/docker/cp-test.txt: (9.2755879s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000 "sudo cat /home/docker/cp-test.txt": (9.1288733s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 cp multinode-659000:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4000448911\001\cp-test_multinode-659000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 cp multinode-659000:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4000448911\001\cp-test_multinode-659000.txt: (9.1045704s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000 "sudo cat /home/docker/cp-test.txt": (9.2461204s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 cp multinode-659000:/home/docker/cp-test.txt multinode-659000-m02:/home/docker/cp-test_multinode-659000_multinode-659000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 cp multinode-659000:/home/docker/cp-test.txt multinode-659000-m02:/home/docker/cp-test_multinode-659000_multinode-659000-m02.txt: (15.8741753s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000 "sudo cat /home/docker/cp-test.txt": (9.3074885s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000-m02 "sudo cat /home/docker/cp-test_multinode-659000_multinode-659000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000-m02 "sudo cat /home/docker/cp-test_multinode-659000_multinode-659000-m02.txt": (9.1865246s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 cp multinode-659000:/home/docker/cp-test.txt multinode-659000-m03:/home/docker/cp-test_multinode-659000_multinode-659000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 cp multinode-659000:/home/docker/cp-test.txt multinode-659000-m03:/home/docker/cp-test_multinode-659000_multinode-659000-m03.txt: (16.0365272s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000 "sudo cat /home/docker/cp-test.txt": (9.105521s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000-m03 "sudo cat /home/docker/cp-test_multinode-659000_multinode-659000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000-m03 "sudo cat /home/docker/cp-test_multinode-659000_multinode-659000-m03.txt": (9.1928982s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 cp testdata\cp-test.txt multinode-659000-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 cp testdata\cp-test.txt multinode-659000-m02:/home/docker/cp-test.txt: (9.1187598s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000-m02 "sudo cat /home/docker/cp-test.txt": (9.2366171s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 cp multinode-659000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4000448911\001\cp-test_multinode-659000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 cp multinode-659000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4000448911\001\cp-test_multinode-659000-m02.txt: (9.2109839s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000-m02 "sudo cat /home/docker/cp-test.txt": (9.1083682s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 cp multinode-659000-m02:/home/docker/cp-test.txt multinode-659000:/home/docker/cp-test_multinode-659000-m02_multinode-659000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 cp multinode-659000-m02:/home/docker/cp-test.txt multinode-659000:/home/docker/cp-test_multinode-659000-m02_multinode-659000.txt: (15.8236091s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000-m02 "sudo cat /home/docker/cp-test.txt": (9.192789s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000 "sudo cat /home/docker/cp-test_multinode-659000-m02_multinode-659000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000 "sudo cat /home/docker/cp-test_multinode-659000-m02_multinode-659000.txt": (9.3418856s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 cp multinode-659000-m02:/home/docker/cp-test.txt multinode-659000-m03:/home/docker/cp-test_multinode-659000-m02_multinode-659000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 cp multinode-659000-m02:/home/docker/cp-test.txt multinode-659000-m03:/home/docker/cp-test_multinode-659000-m02_multinode-659000-m03.txt: (16.0512534s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000-m02 "sudo cat /home/docker/cp-test.txt": (9.1900255s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000-m03 "sudo cat /home/docker/cp-test_multinode-659000-m02_multinode-659000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000-m03 "sudo cat /home/docker/cp-test_multinode-659000-m02_multinode-659000-m03.txt": (9.0752178s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 cp testdata\cp-test.txt multinode-659000-m03:/home/docker/cp-test.txt
E0127 12:25:47.475178    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 cp testdata\cp-test.txt multinode-659000-m03:/home/docker/cp-test.txt: (9.1533991s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000-m03 "sudo cat /home/docker/cp-test.txt": (9.2916799s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 cp multinode-659000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4000448911\001\cp-test_multinode-659000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 cp multinode-659000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube6\AppData\Local\Temp\TestMultiNodeserialCopyFile4000448911\001\cp-test_multinode-659000-m03.txt: (9.1117055s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000-m03 "sudo cat /home/docker/cp-test.txt": (9.1806992s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 cp multinode-659000-m03:/home/docker/cp-test.txt multinode-659000:/home/docker/cp-test_multinode-659000-m03_multinode-659000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 cp multinode-659000-m03:/home/docker/cp-test.txt multinode-659000:/home/docker/cp-test_multinode-659000-m03_multinode-659000.txt: (15.9754247s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000-m03 "sudo cat /home/docker/cp-test.txt": (9.1080688s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000 "sudo cat /home/docker/cp-test_multinode-659000-m03_multinode-659000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000 "sudo cat /home/docker/cp-test_multinode-659000-m03_multinode-659000.txt": (9.1440116s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 cp multinode-659000-m03:/home/docker/cp-test.txt multinode-659000-m02:/home/docker/cp-test_multinode-659000-m03_multinode-659000-m02.txt
E0127 12:27:04.035536    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 cp multinode-659000-m03:/home/docker/cp-test.txt multinode-659000-m02:/home/docker/cp-test_multinode-659000-m03_multinode-659000-m02.txt: (16.329164s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000-m03 "sudo cat /home/docker/cp-test.txt": (9.3066876s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000-m02 "sudo cat /home/docker/cp-test_multinode-659000-m03_multinode-659000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 ssh -n multinode-659000-m02 "sudo cat /home/docker/cp-test_multinode-659000-m03_multinode-659000-m02.txt": (9.2958006s)
--- PASS: TestMultiNode/serial/CopyFile (352.27s)

TestMultiNode/serial/StopNode (74.88s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 node stop m03: (24.0778932s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-659000 status: exit status 7 (25.5467051s)

-- stdout --
	multinode-659000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-659000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-659000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-659000 status --alsologtostderr: exit status 7 (25.2534216s)

-- stdout --
	multinode-659000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-659000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-659000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0127 12:28:22.749337    2644 out.go:345] Setting OutFile to fd 1912 ...
	I0127 12:28:22.821228    2644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:28:22.821228    2644 out.go:358] Setting ErrFile to fd 1028...
	I0127 12:28:22.821228    2644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 12:28:22.834239    2644 out.go:352] Setting JSON to false
	I0127 12:28:22.834239    2644 mustload.go:65] Loading cluster: multinode-659000
	I0127 12:28:22.834239    2644 notify.go:220] Checking for updates...
	I0127 12:28:22.835244    2644 config.go:182] Loaded profile config "multinode-659000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 12:28:22.835244    2644 status.go:174] checking status of multinode-659000 ...
	I0127 12:28:22.836228    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:28:24.947179    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:28:24.947179    2644 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:28:24.947179    2644 status.go:371] multinode-659000 host status = "Running" (err=<nil>)
	I0127 12:28:24.947179    2644 host.go:66] Checking if "multinode-659000" exists ...
	I0127 12:28:24.947922    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:28:27.030412    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:28:27.030489    2644 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:28:27.030581    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:28:29.517759    2644 main.go:141] libmachine: [stdout =====>] : 172.29.204.17
	
	I0127 12:28:29.517759    2644 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:28:29.517759    2644 host.go:66] Checking if "multinode-659000" exists ...
	I0127 12:28:29.535559    2644 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:28:29.535559    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000 ).state
	I0127 12:28:31.658338    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:28:31.658847    2644 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:28:31.659011    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000 ).networkadapters[0]).ipaddresses[0]
	I0127 12:28:34.108442    2644 main.go:141] libmachine: [stdout =====>] : 172.29.204.17
	
	I0127 12:28:34.109492    2644 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:28:34.109647    2644 sshutil.go:53] new ssh client: &{IP:172.29.204.17 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000\id_rsa Username:docker}
	I0127 12:28:34.204887    2644 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.6692789s)
	I0127 12:28:34.219050    2644 ssh_runner.go:195] Run: systemctl --version
	I0127 12:28:34.238810    2644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:28:34.264627    2644 kubeconfig.go:125] found "multinode-659000" server: "https://172.29.204.17:8443"
	I0127 12:28:34.264745    2644 api_server.go:166] Checking apiserver status ...
	I0127 12:28:34.275403    2644 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0127 12:28:34.312184    2644 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2097/cgroup
	W0127 12:28:34.329313    2644 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2097/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0127 12:28:34.340563    2644 ssh_runner.go:195] Run: ls
	I0127 12:28:34.347977    2644 api_server.go:253] Checking apiserver healthz at https://172.29.204.17:8443/healthz ...
	I0127 12:28:34.356627    2644 api_server.go:279] https://172.29.204.17:8443/healthz returned 200:
	ok
	I0127 12:28:34.356721    2644 status.go:463] multinode-659000 apiserver status = Running (err=<nil>)
	I0127 12:28:34.356721    2644 status.go:176] multinode-659000 status: &{Name:multinode-659000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:28:34.356721    2644 status.go:174] checking status of multinode-659000-m02 ...
	I0127 12:28:34.357610    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:28:36.427312    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:28:36.427312    2644 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:28:36.428016    2644 status.go:371] multinode-659000-m02 host status = "Running" (err=<nil>)
	I0127 12:28:36.428016    2644 host.go:66] Checking if "multinode-659000-m02" exists ...
	I0127 12:28:36.428476    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:28:38.512336    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:28:38.512487    2644 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:28:38.512581    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:28:40.970849    2644 main.go:141] libmachine: [stdout =====>] : 172.29.199.129
	
	I0127 12:28:40.970980    2644 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:28:40.971039    2644 host.go:66] Checking if "multinode-659000-m02" exists ...
	I0127 12:28:40.982712    2644 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0127 12:28:40.982712    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m02 ).state
	I0127 12:28:43.048179    2644 main.go:141] libmachine: [stdout =====>] : Running
	
	I0127 12:28:43.048179    2644 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:28:43.048179    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-659000-m02 ).networkadapters[0]).ipaddresses[0]
	I0127 12:28:45.600490    2644 main.go:141] libmachine: [stdout =====>] : 172.29.199.129
	
	I0127 12:28:45.600635    2644 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:28:45.600635    2644 sshutil.go:53] new ssh client: &{IP:172.29.199.129 Port:22 SSHKeyPath:C:\Users\jenkins.minikube6\minikube-integration\.minikube\machines\multinode-659000-m02\id_rsa Username:docker}
	I0127 12:28:45.695854    2644 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.7130927s)
	I0127 12:28:45.707474    2644 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0127 12:28:45.734060    2644 status.go:176] multinode-659000-m02 status: &{Name:multinode-659000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0127 12:28:45.734060    2644 status.go:174] checking status of multinode-659000-m03 ...
	I0127 12:28:45.734879    2644 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-659000-m03 ).state
	I0127 12:28:47.838989    2644 main.go:141] libmachine: [stdout =====>] : Off
	
	I0127 12:28:47.839023    2644 main.go:141] libmachine: [stderr =====>] : 
	I0127 12:28:47.839023    2644 status.go:371] multinode-659000-m03 host status = "Stopped" (err=<nil>)
	I0127 12:28:47.839111    2644 status.go:384] host is not running, skipping remaining checks
	I0127 12:28:47.839111    2644 status.go:176] multinode-659000-m03 status: &{Name:multinode-659000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (74.88s)
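Note: the stderr trace above shows how status is gathered with the hyperv driver: each node is probed by shelling out to PowerShell for `( Hyper-V\Get-VM <name> ).state` and the adapter's first IP address, and a VM reporting `Off` is surfaced as `host: Stopped`. The Go sketch below reproduces just that state query under the same assumptions; the `vmState` helper is illustrative only, not minikube's implementation.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// vmState runs the same PowerShell query seen in the trace above and returns
// the Hyper-V VM state string (e.g. "Running" or "Off").
func vmState(name string) (string, error) {
	cmd := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive",
		fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name),
	)
	out, err := cmd.Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := vmState("multinode-659000-m03")
	if err != nil {
		fmt.Println("query failed:", err)
		return
	}
	// "Off" corresponds to the "host: Stopped" status shown in the test output.
	fmt.Println("VM state:", state)
}
```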

TestMultiNode/serial/StartAfterStop (192.11s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 node start m03 -v=7 --alsologtostderr
E0127 12:30:30.565504    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 12:30:47.478585    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 node start m03 -v=7 --alsologtostderr: (2m37.4319613s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-659000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-659000 status -v=7 --alsologtostderr: (34.470763s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (192.11s)

TestPreload (495.74s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-010700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0127 12:42:04.044377    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 12:45:47.487589    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-010700 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (3m58.8305499s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-010700 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-010700 image pull gcr.io/k8s-minikube/busybox: (8.9968911s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-010700
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-010700: (39.164351s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-010700 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0127 12:47:04.049664    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 12:47:10.578422    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-010700 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m40.020168s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-010700 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-010700 image list: (7.1810559s)
helpers_test.go:175: Cleaning up "test-preload-010700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-010700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-010700: (41.5445414s)
--- PASS: TestPreload (495.74s)

TestScheduledStopWindows (320.21s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-436900 --memory=2048 --driver=hyperv
E0127 12:50:47.491082    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 12:51:47.139721    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 12:52:04.051510    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-436900 --memory=2048 --driver=hyperv: (3m8.8464287s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-436900 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-436900 --schedule 5m: (10.167041s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-436900 -n scheduled-stop-436900
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-436900 -n scheduled-stop-436900: exit status 1 (10.0109078s)
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-436900 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-436900 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.2571234s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-436900 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-436900 --schedule 5s: (10.1141792s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-436900
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-436900: exit status 7 (2.3729394s)

-- stdout --
	scheduled-stop-436900
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-436900 -n scheduled-stop-436900
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-436900 -n scheduled-stop-436900: exit status 7 (2.3147196s)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-436900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-436900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-436900: (27.1207499s)
--- PASS: TestScheduledStopWindows (320.21s)
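Note: the sequence above exercises scheduled stop end to end: schedule a stop 5 minutes out, inspect the `minikube-scheduled-stop` systemd unit over SSH, reschedule for 5 seconds, then confirm via `status --format={{.Host}}` that the host reaches `Stopped` (exit status 7 once the VM is down). A hedged Go sketch of that final polling step follows; the `waitForStop` helper and its timeout are assumptions for illustration, not the test's implementation.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForStop polls "minikube status" the same way the test above does,
// until the host reports Stopped or the timeout expires.
func waitForStop(profile string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// A non-zero exit (status 7) is expected once the VM is stopped,
		// so the error is ignored and only stdout is inspected.
		out, _ := exec.Command(
			"out/minikube-windows-amd64.exe",
			"status", "--format={{.Host}}", "-p", profile,
		).Output()
		if strings.TrimSpace(string(out)) == "Stopped" {
			return nil
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("profile %q did not reach Stopped within %s", profile, timeout)
}

func main() {
	if err := waitForStop("scheduled-stop-436900", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```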

TestRunningBinaryUpgrade (1024.62s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.4039240250.exe start -p running-upgrade-739200 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.4039240250.exe start -p running-upgrade-739200 --memory=2200 --vm-driver=hyperv: (8m13.0960262s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-739200 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0127 13:03:50.592317    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0127 13:05:47.500177    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-739200 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (7m31.4442888s)
helpers_test.go:175: Cleaning up "running-upgrade-739200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-739200
E0127 13:12:04.063017    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-739200: (1m19.4665936s)
--- PASS: TestRunningBinaryUpgrade (1024.62s)

TestKubernetesUpgrade (1276.41s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-739200 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-739200 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (5m39.8490309s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-739200
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-739200: (40.1406315s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-739200 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-739200 status --format={{.Host}}: exit status 7 (2.4699955s)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-739200 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=hyperv
E0127 13:02:04.057366    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-739200 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=hyperv: (7m51.8961771s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-739200 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-739200 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-739200 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv: exit status 106 (294.1847ms)

-- stdout --
	* [kubernetes-upgrade-739200] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-739200
	    minikube start -p kubernetes-upgrade-739200 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7392002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.1, by running:
	    
	    minikube start -p kubernetes-upgrade-739200 --kubernetes-version=v1.32.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-739200 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=hyperv
E0127 13:10:47.503698    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-739200 --memory=2200 --kubernetes-version=v1.32.1 --alsologtostderr -v=1 --driver=hyperv: (6m13.3217683s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-739200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-739200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-739200: (48.2590788s)
--- PASS: TestKubernetesUpgrade (1276.41s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.62s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-739200 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-739200 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (623.0526ms)

-- stdout --
	* [NoKubernetes-739200] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.62s)

TestStoppedBinaryUpgrade/Setup (0.81s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.81s)

TestStoppedBinaryUpgrade/Upgrade (794.19s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2183243694.exe start -p stopped-upgrade-556900 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2183243694.exe start -p stopped-upgrade-556900 --memory=2200 --vm-driver=hyperv: (5m50.758541s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2183243694.exe -p stopped-upgrade-556900 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube6\AppData\Local\Temp\minikube-v1.26.0.2183243694.exe -p stopped-upgrade-556900 stop: (35.0132009s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-556900 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0127 13:07:04.060763    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-556900 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (6m48.4192281s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (794.19s)

TestPause/serial/Start (379.69s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-918600 --memory=2048 --install-addons=false --wait=all --driver=hyperv
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-918600 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (6m19.6906375s)
--- PASS: TestPause/serial/Start (379.69s)

TestStoppedBinaryUpgrade/MinikubeLogs (9.34s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-556900
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-556900: (9.340347s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (9.34s)

TestPause/serial/SecondStartNoReconfiguration (448.13s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-918600 --alsologtostderr -v=1 --driver=hyperv
E0127 13:20:30.605378    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\addons-226100\\client.crt: The system cannot find the path specified." logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-918600 --alsologtostderr -v=1 --driver=hyperv: (7m28.0917393s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (448.13s)

TestPause/serial/Pause (9.17s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-918600 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-918600 --alsologtostderr -v=5: (9.1647486s)
--- PASS: TestPause/serial/Pause (9.17s)

TestPause/serial/VerifyStatus (14.16s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-918600 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-918600 --output=json --layout=cluster: exit status 2 (14.1555974s)

-- stdout --
	{"Name":"pause-918600","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-918600","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (14.16s)

TestPause/serial/Unpause (8.4s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-918600 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-918600 --alsologtostderr -v=5: (8.3958376s)
--- PASS: TestPause/serial/Unpause (8.40s)

TestPause/serial/PauseAgain (9.4s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-918600 --alsologtostderr -v=5
E0127 13:27:04.072813    5956 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube6\\minikube-integration\\.minikube\\profiles\\functional-253500\\client.crt: The system cannot find the path specified." logger="UnhandledError"
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-918600 --alsologtostderr -v=5: (9.402181s)
--- PASS: TestPause/serial/PauseAgain (9.40s)

Test skip (32/211)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.32.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.32.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.1/cached-images (0.00s)

TestDownloadOnly/v1.32.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.32.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-253500 --alsologtostderr -v=1]
functional_test.go:916: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-253500 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 13244: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)

TestFunctional/parallel/DryRun (5.01s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-253500 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:974: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-253500 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0118915s)

-- stdout --
	* [functional-253500] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	I0127 11:03:04.009140    5852 out.go:345] Setting OutFile to fd 1048 ...
	I0127 11:03:04.103818    5852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:03:04.103818    5852 out.go:358] Setting ErrFile to fd 828...
	I0127 11:03:04.103818    5852 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:03:04.127420    5852 out.go:352] Setting JSON to false
	I0127 11:03:04.131418    5852 start.go:129] hostinfo: {"hostname":"minikube6","uptime":438767,"bootTime":1737537016,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5371 Build 19045.5371","kernelVersion":"10.0.19045.5371 Build 19045.5371","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0127 11:03:04.132416    5852 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0127 11:03:04.137413    5852 out.go:177] * [functional-253500] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	I0127 11:03:04.141410    5852 notify.go:220] Checking for updates...
	I0127 11:03:04.144065    5852 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 11:03:04.146537    5852 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:03:04.149476    5852 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0127 11:03:04.151997    5852 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 11:03:04.154975    5852 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:03:04.159741    5852 config.go:182] Loaded profile config "functional-253500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 11:03:04.161850    5852 driver.go:394] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:980: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.01s)

TestFunctional/parallel/InternationalLanguage (5.04s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-253500 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-253500 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0393112s)

-- stdout --
	* [functional-253500] minikube v1.35.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	  - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20318
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	I0127 11:03:00.516484    3876 out.go:345] Setting OutFile to fd 1616 ...
	I0127 11:03:00.591282    3876 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:03:00.591282    3876 out.go:358] Setting ErrFile to fd 1516...
	I0127 11:03:00.591282    3876 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0127 11:03:00.615571    3876 out.go:352] Setting JSON to false
	I0127 11:03:00.618803    3876 start.go:129] hostinfo: {"hostname":"minikube6","uptime":438763,"bootTime":1737537016,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5371 Build 19045.5371","kernelVersion":"10.0.19045.5371 Build 19045.5371","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"be8185f2-ae84-4027-a4e5-684d168fb2f3"}
	W0127 11:03:00.619326    3876 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0127 11:03:00.624738    3876 out.go:177] * [functional-253500] minikube v1.35.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.5371 Build 19045.5371
	I0127 11:03:00.629507    3876 notify.go:220] Checking for updates...
	I0127 11:03:00.632255    3876 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube6\minikube-integration\kubeconfig
	I0127 11:03:00.635118    3876 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0127 11:03:00.639541    3876 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube6\minikube-integration\.minikube
	I0127 11:03:00.642901    3876 out.go:177]   - MINIKUBE_LOCATION=20318
	I0127 11:03:00.646317    3876 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0127 11:03:00.650118    3876 config.go:182] Loaded profile config "functional-253500": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.1
	I0127 11:03:00.651313    3876 driver.go:394] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:1025: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.04s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)